Experimental Design: Types, Examples & Methods

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.

The researcher must decide how they will allocate the sample to the different experimental groups. For example, if there are 10 participants, will all 10 take part in both conditions (i.e., repeated measures), or will the sample be split in half, with each participant taking part in only one condition?

Three types of experimental designs are commonly used:

1. Independent Measures

Independent measures design, also known as a between-groups design, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.

This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to either group.

Independent measures involve using two separate groups of participants, one in each condition.

  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Pro : Avoids order effects (such as practice or fatigue), as each person participates in one condition only. If a person took part in several conditions, they might become bored, tired, and fed up by the time they came to the second condition, or become wise to the requirements of the experiment.
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background. These differences are known as participant variables (i.e., a type of extraneous variable).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).
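The random-allocation control described above can be sketched in a few lines of Python (an illustrative sketch only; the function name and participant labels are invented for this example):

```python
import random

def randomly_allocate(participants, seed=None):
    """Shuffle the recruited sample and split it into two independent groups.

    Because the shuffle is random, every participant has an equal chance
    of ending up in either condition.
    """
    rng = random.Random(seed)   # seed only to make the example repeatable
    pool = list(participants)   # copy, so the caller's list is untouched
    rng.shuffle(pool)
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]

# Ten participants split into an experimental and a control group of five each
experimental, control = randomly_allocate([f"P{i}" for i in range(1, 11)], seed=42)
```

Shuffling and then splitting, rather than flipping a coin per person, also guarantees equal group sizes.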

2. Repeated Measures Design

Repeated measures design is an experimental design where the same participants take part in each condition of the independent variable. This means that each condition of the experiment includes the same group of participants.

Repeated measures design is also known as a within-groups or within-subjects design.

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior. Performance in the second condition may be better because the participants know what to do (i.e., a practice effect), or worse because they are tired (i.e., a fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro : Fewer people are needed, as everyone participates in all conditions (i.e., saves time).
  • Control : To combat order effects, the researcher counterbalances the order of the conditions for the participants; that is, the order in which participants complete the different conditions is alternated.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

We might expect the participants to learn better in “no noise,” but this could be due to order effects, such as practice, rather than the noise manipulation itself. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups, which complete the two conditions in opposite orders: group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.
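This AB/BA counterbalancing can be sketched in Python (the condition names come from the noise example above; the function name and participant labels are invented):

```python
import random

def counterbalance(participants, conditions=("loud noise", "no noise"), seed=None):
    """Give half the sample the order A-then-B and the other half B-then-A.

    Order effects still occur for each individual, but they occur equally
    often under both orders, so they balance out in the group results.
    """
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)             # which half a person lands in is random
    half = len(pool) // 2
    order_ab = list(conditions)
    order_ba = order_ab[::-1]
    orders = {p: order_ab for p in pool[:half]}
    orders.update({p: order_ba for p in pool[half:]})
    return orders

# Eight participants: four get loud-then-quiet, four get quiet-then-loud
orders = counterbalance([f"P{i}" for i in range(1, 9)])
```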


3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group.

One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.


  • Con : If one participant drops out, you lose the data for both members of the pair.
  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : It is impossible to match people exactly, unless they are identical twins.
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all of these problems.
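The pairing-then-random-assignment procedure can be sketched as follows (a rough Python illustration; the matching variable, scores, and participant labels are invented for this example):

```python
import random

def matched_pairs(scores, seed=None):
    """Pair participants with adjacent scores on the matching variable,
    then randomly send one member of each pair to each condition."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)   # order participants by score
    experimental, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]     # neighbours = closest available match
        rng.shuffle(pair)                     # random assignment within the pair
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

# e.g. matching on a standardized depression score before assigning therapy groups
scores = {"P1": 12, "P2": 30, "P3": 14, "P4": 29, "P5": 7, "P6": 8}
experimental, control = matched_pairs(scores, seed=3)
```

Sorting and pairing neighbours is the simplest matching rule; real studies may match on several variables at once, which is part of why the design is so time-consuming.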

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.

2. Repeated measures / within-groups : The same participants take part in each condition of the independent variable.

3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1. To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2. To assess the difference in reading comprehension between 7- and 9-year-olds, a researcher recruited a group of each age from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.

3. To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4. To assess the effect of organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

The variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables that are not the independent variable but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.


Psychological Experimental Design

Living reference work entry; first published online 15 February 2024.

Zhang Houcan and He Dongjun

Psychological experimental design refers to the experimental design and methodological approaches devised by researchers before conducting an experiment, based on the research objectives. It can be broadly or narrowly defined. Broadly, psychological experimental design refers to the general procedure of scientific research, including problem formulation, hypothesis development, the selection, manipulation, and control of variables, and the statistical analysis of results and paper writing, among other activities. Narrowly, psychological experimental design refers to the specific experimental plan or model that researchers develop for arranging variables and procedures, along with the related statistical analysis. The main components of psychological experimental design include how to reasonably arrange the experimental procedures and how to perform statistical analysis on the experimental data. The main steps can be summarized as follows: (1) formulate hypotheses based on...



Author information

Zhang Houcan, Faculty of Psychology, Beijing Normal University, Beijing, China

He Dongjun (corresponding author), School of Psychology, Chengdu Medical University, Chengdu, China


© 2024 Encyclopedia of China Publishing House


Houcan, Z., Dongjun, H. (2024). Psychological Experimental Design. In: The ECPH Encyclopedia of Psychology. Springer, Singapore. https://doi.org/10.1007/978-981-99-6000-2_490-1


Received: 04 January 2024; Accepted: 05 January 2024; Published: 15 February 2024. Publisher: Springer, Singapore.


6.2 Experimental Design

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 college students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called random assignment , which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
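The strict coin-flip version of random assignment might look like this in Python (illustrative only; the function name, condition labels, and participant labels are invented):

```python
import random

def assign_by_coin_flip(participants, seed=None):
    """Strict random assignment: an independent 50/50 'flip' per participant.

    Both criteria are met (equal chance per condition, each assignment made
    independently), but the group sizes can easily come out unequal.
    """
    rng = random.Random(seed)
    return {p: ("A" if rng.random() < 0.5 else "B") for p in participants}

assignments = assign_by_coin_flip([f"P{i}" for i in range(1, 11)], seed=7)
```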

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. Table 6.2 “Block Randomization Sequence for Assigning Nine Participants to Three Conditions” shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website ( http://www.randomizer.org ) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 6.2 Block Randomization Sequence for Assigning Nine Participants to Three Conditions
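A sequence like the one in Table 6.2 can be generated with a short routine (a sketch of the idea, not the Research Randomizer site's actual algorithm):

```python
import random

def block_randomization(n_participants, conditions=("A", "B", "C"), seed=None):
    """Build an assignment sequence in which every condition occurs once,
    in a random order, within each successive block."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        rng.shuffle(block)          # random order within this block
        sequence.extend(block)
    return sequence[:n_participants]

# Nine participants, three conditions -> three blocks of A, B, C in random order
sequence = block_randomization(9, seed=5)
```

Each new participant is simply given the next entry in the pre-generated sequence, exactly as described above.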

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition , in which they receive the treatment, or a control condition , in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial .

There are different types of control conditions. In a no-treatment control condition , participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008).

Placebo effects are interesting in their own right (see Note 6.28 “The Powerful Placebo” ), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” ) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 6.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Fortunately, there are several solutions to this problem. One is to include a placebo control condition , in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This is what is shown by a comparison of the two outer bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” .

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition , in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999). There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002). The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

Doctors treating a patient in Surgery

Research has shown that patients with osteoarthritis of the knee who receive a “sham surgery” experience reductions in pain and improvement in knee function similar to those of patients who receive a real surgery.

Army Medicine – Surgery – CC BY 2.0.

Within-Subjects Experiments

In a within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect , where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect , where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This is called a context effect . For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of being randomly assigned to conditions, participants are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
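The order-assignment step can be sketched in Python (an illustration that additionally assumes the six orders should be used about equally often; the function name and labels are invented):

```python
import itertools
import random

def assign_condition_orders(participants, conditions=("A", "B", "C"), seed=None):
    """Enumerate every order of the conditions (3! = 6 for three conditions)
    and spread the randomly shuffled participants evenly across those orders."""
    rng = random.Random(seed)
    all_orders = list(itertools.permutations(conditions))   # ABC, ACB, BAC, ...
    pool = list(participants)
    rng.shuffle(pool)               # the random-assignment step
    return {p: all_orders[i % len(all_orders)] for i, p in enumerate(pool)}

# Twelve participants across six orders -> two participants per order
assigned = assign_condition_orders([f"P{i}" for i in range(1, 13)])
```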

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this, he asked one group of participants to rate how large the number 9 was on a 1-to-10 rating scale and another group to rate how large the number 221 was on the same 1-to-10 rating scale (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
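The mixed-sequence procedure above can be sketched as follows; the stimulus labels and ratings are hypothetical, and a different random seed per participant stands in for "a different random order for each participant."

```python
import random

def make_stimulus_order(attractive, unattractive, seed):
    """Mix both defendant types into one randomized presentation sequence."""
    stimuli = [(photo, "attractive") for photo in attractive]
    stimuli += [(photo, "unattractive") for photo in unattractive]
    rng = random.Random(seed)  # a different seed gives each participant a new order
    rng.shuffle(stimuli)
    return stimuli

def mean_by_condition(ratings):
    """ratings: list of (condition, guilt_rating) pairs -> per-condition mean."""
    by_condition = {}
    for condition, rating in ratings:
        by_condition.setdefault(condition, []).append(rating)
    return {c: sum(v) / len(v) for c, v in by_condition.items()}
```

After the session, `mean_by_condition` collapses each participant's many responses into one score per condition, which is the value that enters the analysis.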

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly this.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.

Discussion: For each of the following topics, list the pros and cons of a between-subjects and within-subjects design and decide which would be better.

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g., dog) are recalled better than abstract nouns (e.g., truth).

Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.

Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4, 243–249.

Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347, 81–88.

Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59, 565–590.

Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician. Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Experimental Research

23 Experiment Basics

Learning Objectives

  • Explain what an experiment is and recognize examples of studies that are experiments and studies that are not experiments.
  • Distinguish between the manipulation of the independent variable and control of extraneous variables and explain the importance of each.
  • Recognize examples of confounding variables and explain how they affect the internal validity of a study.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.

What Is an Experiment?

As we saw earlier in the book, an  experiment is a type of study designed specifically to answer the question of whether there is a causal relationship between two variables. In other words, whether changes in one variable (referred to as an independent variable ) cause a change in another variable (referred to as a dependent variable ). Experiments have two fundamental features. The first is that the researchers manipulate, or systematically vary, the level of the independent variable. The different levels of the independent variable are called conditions . For example, in Darley and Latané’s experiment, the independent variable was the number of witnesses that participants believed to be present. The researchers manipulated this independent variable by telling participants that there were either one, two, or five other students involved in the discussion, thereby creating three conditions. For a new researcher, it is easy to confuse these terms by believing there are three independent variables in this situation: one, two, or five students involved in the discussion, but there is actually only one independent variable (number of witnesses) with three different levels or conditions (one, two or five students). The second fundamental feature of an experiment is that the researcher exerts control over, or minimizes the variability in, variables other than the independent and dependent variable. These other variables are called extraneous variables . Darley and Latané tested all their participants in the same room, exposed them to the same emergency situation, and so on. They also randomly assigned their participants to conditions so that the three groups would be similar to each other to begin with. Notice that although the words  manipulation  and  control  have similar meanings in everyday language, researchers make a clear distinction between them. 
They manipulate  the independent variable by systematically changing its levels and control  other variables by holding them constant.

Manipulation of the Independent Variable

Again, to  manipulate an independent variable means to change its level systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times. For example, to see whether expressive writing affects people’s health, a researcher might instruct some participants to write about traumatic experiences and others to write about neutral experiences. The different levels of the independent variable are referred to as conditions , and researchers often give the conditions short descriptive names to make it easy to talk and write about them. In this case, the conditions might be called the “traumatic condition” and the “neutral condition.”

Notice that the manipulation of an independent variable must involve the active intervention of the researcher. Comparing groups of people who differ on the independent variable before the study begins is not the same as manipulating that variable. For example, a researcher who compares the health of people who already keep a journal with the health of people who do not keep a journal has not manipulated this variable and therefore has not conducted an experiment. This distinction  is important because groups that already differ in one way at the beginning of a study are likely to differ in other ways too. For example, people who choose to keep journals might also be more conscientious, more introverted, or less stressed than people who do not. Therefore, any observed difference between the two groups in terms of their health might have been caused by whether or not they keep a journal, or it might have been caused by any of the other differences between people who do and do not keep journals. Thus the active manipulation of the independent variable is crucial for eliminating potential alternative explanations for the results.

Of course, there are many situations in which the independent variable cannot be manipulated for practical or ethical reasons and therefore an experiment is not possible. For example, whether or not people have a significant early illness experience cannot be manipulated, making it impossible to conduct an experiment on the effect of early illness experiences on the development of hypochondriasis. This caveat does not mean it is impossible to study the relationship between early illness experiences and hypochondriasis—only that it must be done using nonexperimental approaches. We will discuss this type of methodology in detail later in the book.

Independent variables can be manipulated to create two conditions, and experiments involving a single independent variable with two conditions are often referred to as a single-factor two-level design. However, sometimes greater insights can be gained by adding more conditions to an experiment. When an experiment has one independent variable that is manipulated to produce more than two conditions, it is referred to as a single-factor multi-level design. So rather than comparing a condition in which there was one witness to a condition in which there were five witnesses (which would represent a single-factor two-level design), Darley and Latané’s experiment used a single-factor multi-level design, manipulating the independent variable to produce three conditions (one-witness, two-witnesses, and five-witnesses conditions).

Control of Extraneous Variables

As we have seen previously in the chapter, an  extraneous variable  is anything that varies in the context of a study other than the independent and dependent variables. In an experiment on the effect of expressive writing on health, for example, extraneous variables would include participant variables (individual differences) such as their writing ability, their diet, and their gender. They would also include situational or task variables such as the time of day when participants write, whether they write by hand or on a computer, and the weather. Extraneous variables pose a problem because many of them are likely to have some effect on the dependent variable. For example, participants’ health will be affected by many things other than whether or not they engage in expressive writing. This influencing factor can make it difficult to separate the effect of the independent variable from the effects of the extraneous variables, which is why it is important to control extraneous variables by holding them constant.

Extraneous Variables as “Noise”

Extraneous variables make it difficult to detect the effect of the independent variable in two ways. One is by adding variability or “noise” to the data. Imagine a simple experiment on the effect of mood (happy vs. sad) on the number of happy childhood events people are able to recall. Participants are put into a negative or positive mood (by showing them a happy or sad video clip) and then asked to recall as many happy childhood events as they can. The two leftmost columns of  Table 5.1 show what the data might look like if there were no extraneous variables and the number of happy childhood events participants recalled was affected only by their moods. Every participant in the happy mood condition recalled exactly four happy childhood events, and every participant in the sad mood condition recalled exactly three. The effect of mood here is quite obvious. In reality, however, the data would probably look more like those in the two rightmost columns of  Table 5.1 . Even in the happy mood condition, some participants would recall fewer happy memories because they have fewer to draw on, use less effective recall strategies, or are less motivated. And even in the sad mood condition, some participants would recall more happy childhood memories because they have more happy memories to draw on, they use more effective recall strategies, or they are more motivated. Although the mean difference between the two groups is the same as in the idealized data, this difference is much less obvious in the context of the greater variability in the data. Thus one reason researchers try to control extraneous variables is so their data look more like the idealized data in  Table 5.1 , which makes the effect of the independent variable easier to detect (although real data never look quite  that  good).
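The contrast between the idealized and realistic columns of Table 5.1 can be simulated. The means (4 vs. 3) follow the example in the text, but the noise level below is an illustrative assumption, not the table's actual values.

```python
import random

def simulate_recall(n, happy_mean, sad_mean, noise_sd, seed=1):
    """Simulated counts of happy memories recalled in each mood condition."""
    rng = random.Random(seed)
    happy = [happy_mean + rng.gauss(0, noise_sd) for _ in range(n)]
    sad = [sad_mean + rng.gauss(0, noise_sd) for _ in range(n)]
    return happy, sad

# Idealized data: no extraneous variability, so the 1-point effect of mood is obvious.
ideal_happy, ideal_sad = simulate_recall(10, 4, 3, noise_sd=0)

# Realistic data: the same mean difference, but buried in participant-level noise.
noisy_happy, noisy_sad = simulate_recall(10, 4, 3, noise_sd=2)
```

The mean difference is 1 in both cases; only the within-condition variability changes, which is exactly why controlling extraneous variables makes the effect of the independent variable easier to detect.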

One way to control extraneous variables is to hold them constant. This technique can mean holding situation or task variables constant by testing all participants in the same location, giving them identical instructions, treating them in the same way, and so on. It can also mean holding participant variables constant. For example, many studies of language limit participants to right-handed people, who generally have their language areas isolated in their left cerebral hemispheres [1] . Left-handed people are more likely to have their language areas isolated in their right cerebral hemispheres or distributed across both hemispheres, which can change the way they process language and thereby add noise to the data.

In principle, researchers can control extraneous variables by limiting participants to one very specific category of person, such as 20-year-old, heterosexual, female, right-handed psychology majors. The obvious downside to this approach is that it would lower the external validity of the study—in particular, the extent to which the results can be generalized beyond the people actually studied. For example, it might be unclear whether results obtained with a sample of younger lesbian women would apply to older gay men. In many situations, the advantages of a diverse sample (increased external validity) outweigh the reduction in noise achieved by a homogeneous one.

Extraneous Variables as Confounding Variables

The second way that extraneous variables can make it difficult to detect the effect of the independent variable is by becoming confounding variables. A confounding variable  is an extraneous variable that differs on average across  levels of the independent variable (i.e., it is an extraneous variable that varies systematically with the independent variable). For example, in almost all experiments, participants’ intelligence quotients (IQs) will be an extraneous variable. But as long as there are participants with lower and higher IQs in each condition so that the average IQ is roughly equal across the conditions, then this variation is probably acceptable (and may even be desirable). What would be bad, however, would be for participants in one condition to have substantially lower IQs on average and participants in another condition to have substantially higher IQs on average. In this case, IQ would be a confounding variable.

To confound means to confuse , and this effect is exactly why confounding variables are undesirable. Because they differ systematically across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable.  Figure 5.1  shows the results of a hypothetical study, in which participants in a positive mood condition scored higher on a memory task than participants in a negative mood condition. But if IQ is a confounding variable—with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition—then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher. One way to avoid confounding variables is by holding extraneous variables constant. For example, one could prevent IQ from becoming a confounding variable by limiting participants only to those with IQs of exactly 100. But this approach is not always desirable for reasons we have already discussed. A second and much more general approach—random assignment to conditions—will be discussed in detail shortly.

Figure 5.1 Hypothetical Results From a Study on the Effect of Mood on Memory. Because IQ also differs across conditions, it is a confounding variable.
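A small simulation can illustrate why random assignment keeps IQ from becoming a confounding variable. The IQ distribution, sample size, and condition labels below are all hypothetical.

```python
import random
import statistics

def random_assign(participants, seed=42):
    """Randomly shuffle participants, then split them into two equal groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical IQ scores for 100 participants (mean 100, SD 15).
rng = random.Random(7)
iqs = [rng.gauss(100, 15) for _ in range(100)]

positive_mood, negative_mood = random_assign(iqs)
# IQ still varies *within* each condition (adding noise), but the group means
# are roughly equal on average, so IQ does not vary systematically with mood.
print(statistics.mean(positive_mood), statistics.mean(negative_mood))
```

Holding IQ constant (e.g., only IQ-100 participants) would also remove the confound, but at the cost of external validity; random assignment avoids that trade-off.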

Treatment and Control Conditions

In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This intervention includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition , in which they receive the treatment, or a control condition , in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial .

There are different types of control conditions. In a no-treatment control condition , participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bed sheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008) [2] .

Placebo effects are interesting in their own right (see Note “The Powerful Placebo” ), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 5.2 shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 5.2 ) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 5.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Fortunately, there are several solutions to this problem. One is to include a placebo control condition, in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This difference is what is shown by a comparison of the two outer bars in Figure 5.2.

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a wait-list control condition, in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This disclosure allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999) [3] . There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002) [4] . The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. Note that the IRB would have carefully considered the use of deception in this case and judged that the benefits of using it outweighed the risks and that there was no other way to answer the research question (about the effectiveness of a placebo procedure) without it. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

  • Knecht, S., Dräger, B., Deppe, M., Bobe, L., Lohmann, H., Flöel, A., . . . Henningsen, H. (2000). Handedness and hemispheric language dominance in healthy humans. Brain: A Journal of Neurology, 123(12), 2512–2518. http://dx.doi.org/10.1093/brain/123.12.2512
  • Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59, 565–590.
  • Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician. Baltimore, MD: Johns Hopkins University Press.
  • Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347, 81–88.

Glossary

  • Experiment: A type of study designed specifically to answer the question of whether there is a causal relationship between two variables.
  • Independent variable: The variable the experimenter manipulates.
  • Dependent variable: The variable the experimenter measures (it is the presumed effect).
  • Conditions: The different levels of the independent variable to which participants are assigned.
  • Control: Holding extraneous variables constant in order to separate the effect of the independent variable from the effect of the extraneous variables.
  • Extraneous variable: Any variable other than the dependent and independent variable.
  • Manipulation: Changing the level, or condition, of the independent variable systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times.
  • Single-factor two-level design: An experiment design involving a single independent variable with two conditions.
  • Single-factor multi-level design: An experiment in which one independent variable is manipulated to produce more than two conditions.
  • Confounding variable: An extraneous variable that varies systematically with the independent variable, and thus confuses the effect of the independent variable with the effect of the extraneous one.
  • Treatment: Any intervention meant to change people’s behavior for the better.
  • Treatment condition: The condition in which participants receive the treatment.
  • Control condition: The condition in which participants do not receive the treatment.
  • Randomized clinical trial: An experiment that researches the effectiveness of psychotherapies and medical treatments.
  • No-treatment control condition: The condition in which participants receive no treatment whatsoever.
  • Placebo: A simulated treatment that lacks any active ingredient or element that is hypothesized to make the treatment effective, but is otherwise identical to the treatment.
  • Placebo effect: An effect that is due to the placebo rather than the treatment.
  • Placebo control condition: The condition in which the participants receive a placebo rather than the treatment.
  • Wait-list control condition: The condition in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Frontiers in Research Metrics and Analytics

The Use of Research Methods in Psychological Research: A Systematised Review

Salomé Elizabeth Scholtz

1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa

Werner de Klerk

Leon T. de Beer

2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality and in educating young researchers; however, how they are applied is unclear, which can be detrimental to the field of psychology. This systematised review therefore aimed to determine which research methods are being used, how they are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics, predominantly via quantitative research methods, and that of these 10 topics, social psychology was the most popular. The remainder of the reported methodology is described. We also found that articles lacked rigour and transparency in their reported methodology, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research use these results to determine the possible impact on the field of psychology as a science and to further investigate the use of research methods. The results should prompt future research into the lack of rigour and its implications for replication, the preference for certain methods over others, publication bias, and the choice of sampling method.

Introduction

Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017). Due to this growth, and the need for science-based research on which to base health decisions (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014). Research methods are therefore viewed as important tools used by researchers to collect data (Nieuwenhuis, 2016) and include the following: quantitative, qualitative, mixed-method, and multi-method approaches (Maree, 2016). Additionally, researchers employ various types of literature reviews to address research questions (Grant and Booth, 2009). According to the literature, which research method is used, and why, is a complex question, as it depends on various factors that may include paradigm (O'Neil and Koekemoer, 2016), research question (Grix, 2002), or the skill and exposure of the researcher (Nind et al., 2015). How these research methods are employed is also difficult to discern, as research methods are often depicted as having fixed boundaries that are nonetheless continuously crossed in research (Johnson et al., 2001; Sandelowski, 2011). Examples of this crossing include adding quantitative aspects to qualitative studies (Sandelowski et al., 2009), or stating that a study used a mixed-method design without the study having any characteristics of this design (Truscott et al., 2010).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills (Scott Jones and Goldring, 2015), how theories are developed (Ngulube, 2013), and the credibility of research results (Levitt et al., 2017). This, in turn, can be detrimental to the field (Nind et al., 2015), to journal publication (Ketchen et al., 2008; Ezeh et al., 2010), and to attempts to address public social issues through psychological research (Dweck, 2017). This is especially important given the now well-known replication crisis the field is facing (Earp and Trafimow, 2015; Hengartner, 2018).

Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Studies such as those of Lee et al. (1999) and Bluhm et al. (2011) have attempted to synthesise the use of qualitative methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of Industrial and Organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods, such as mixed-methods research in health psychology, have also reportedly been growing in popularity (O'Cathain, 2009).

A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications), in order to provide a general perspective of method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, employing some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to those of a systematic review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014; Pericall and Taylor, 2014; Barr-Walker, 2017). With no clear parameters identified in the literature (see Grant and Booth, 2009), the sample size of this study was determined by the purpose of the sample (Strydom, 2011) and by time and cost constraints (Maree and Pietersen, 2016). Thus, a non-probability purposive sample (Ritchie et al., 2009) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee (2015), the American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection, with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator, developed by Scimago from the Google PageRank™ algorithm (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general, with no preference for specific research methods, and the journals had to provide full-text access to articles.

The following top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology, and lastly (5) the Journal of Psychology Applied and Interdisciplinary.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually (Grant and Booth, 2009), with an independent person acting as co-coder. Codes of interest included the research topic, method used, design used, sampling method, and methodology (the methods used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.
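The coding step described above (codes derived from each article's wording, grouped into themes, and tallied) amounts to a simple frequency count. The sketch below illustrates that idea; the article titles, codes, and code-to-theme mapping are invented for demonstration and do not come from the study's actual codebook:

```python
from collections import Counter

# Hypothetical mapping from derived codes to broader themes (research topics).
code_to_theme = {
    "peer influence": "Social psychology",
    "prejudice": "Social psychology",
    "working memory": "Cognitive psychology",
    "classroom intervention": "Educational psychology",
}

# Each article yields one or more codes taken from its own wording.
articles = [
    {"title": "Article A", "codes": ["peer influence", "prejudice"]},
    {"title": "Article B", "codes": ["working memory"]},
    {"title": "Article C", "codes": ["classroom intervention"]},
]

# Tally how often each theme occurs across all coded articles.
theme_counts = Counter(
    code_to_theme[code]
    for article in articles
    for code in article["codes"]
)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

In the actual study this tallying was done manually and verified by a co-coder; the point of the sketch is only that themes are frequency aggregates over article-level codes.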

Figure 1. Systematised review procedure.

According to Johnston et al. (2019), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to a full search and data produced in tabular form (Grant and Booth, 2009). The rigorous application of the systematic review is therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol, outlined according to each review stage, before collecting data (Johnston et al., 2019). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers (Johnston et al., 2019). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015; Johnston et al., 2019). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015). Screening for appropriate articles forms an integral part of a systematic review process (Johnston et al., 2019). This step was applied to two aspects of this research study: the choice of eligible journals and of articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by a process of elimination, those irrelevant to the research aim (i.e., interview articles, discussions, etc.) were excluded.

To ensure rigorous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019). Data was categorised according to the codes of interest, along with article identifiers for audit trails, such as the authors, title, and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019) and synthesised in tabular form under the methods used, how these methods were used, and for what topics in the field of psychology.

The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles (Figure 2). Articles were co-coded into 84 codes, from which 10 themes were derived (Table 1).

Figure 2. Journal article frequency.

Table 1. Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under a particular theme/topic if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.

Figure 3. Topic frequency (international sample).

In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best ways to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology is not the only theme that uses experimental research; it focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme, psychological practice, refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the table format provides readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles, as some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.
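The counting convention just described, where methodology totals exceed the article count because each stated method contributes one occurrence, can be sketched as follows. The article data below are invented for illustration only:

```python
from collections import Counter

# Each inner list holds the methods stated by one (hypothetical) article.
# An article stating two methods contributes two occurrences, so the
# occurrence total (4) exceeds the article count (3).
articles_methods = [
    ["quantitative"],
    ["quantitative", "qualitative"],  # one article, two method occurrences
    ["review"],
]

# Tally occurrences across all articles.
occurrences = Counter(m for methods in articles_methods for m in methods)
total = sum(occurrences.values())  # 4 occurrences from 3 articles

# Percentages are taken over occurrences, not over articles.
percentages = {m: round(100 * n / total, 2) for m, n in occurrences.items()}
print(percentages)
```

Under this convention the percentages for a table sum to 100% of occurrences, which is why they cannot simply be read as shares of the 999 articles.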

What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.

Firstly, with regard to the research methods used, our results show that researchers are far more likely to use quantitative research methods (90.22%) than any other research method. Qualitative research was the second most common research method but made up only about 4.79% of general method usage. Reviews occurred almost as often as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4.

Table 2. Research methods in psychology.

Figure 4. Research method frequency in topics.

Secondly, in the case of how these research methods are employed , our study indicated the following.

Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included broad categorisations of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.

Table 3. Sampling use in the field of psychology.

Figure 5. Sampling method frequency in topics.

Designs were categorised based on the articles' statements thereof. It is therefore important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to a lack of experiments and of any other indication of design, which, according to Laher (2016), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of data, as they could include the use of correlational/cross-sectional designs that were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

From the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%) designs, which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review design being systematic reviews (0.57%). The few mixed-method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see how these designs were employed in each specific topic in Table 4 and Figure 6.

Table 4. Design use in the field of psychology.

Figure 6. Design frequency in topics.

Data collection and analysis: data collection included 30 methods, with the method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method, and included established tasks or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis, categorised into ±188 data analysis techniques, shown in Table 6 and Figures 1–7. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed-method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.

Table 5. Data collection in the field of psychology.

Figure 7. Data collection frequency in topics.

Table 6. Data analysis in the field of psychology.

Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and are by no means a comprehensive representation of the aforementioned aim, due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).

With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, even though the included journals indicated openness to articles applying any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or to reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research into possible bias in the publication of research methods is encouraged; additionally, further investigation with a different sample into the proclaimed growth of qualitative research may provide different results.

Review studies were found to outnumber multi-method and mixed-method studies. To this effect, Grant and Booth (2009) state that increased awareness, journal contribution calls, and efficiency in procuring research funds all promote the popularity of reviews. The low frequency of mixed-method studies contradicts the view in the literature that it is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in mixed-method journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed-methods design in articles was methodologically clear in all cases, which was not the case for the remainder of the research methods.

Additionally, a substantial number of studies used a combination of methodologies without being mixed-method or multi-method studies. According to the literature, perceived fixed boundaries are often set aside, as confirmed by this result, in order to investigate the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015). According to Toomela (2010), this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi- and mixed methods, is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to the literature and to debates among academics (Peterson and Merunka, 2014; Laher, 2016). Samples of convenience, and students as participants especially, raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014). Attention to sampling is important, as inappropriate sampling can undermine the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight, and is encouraged by this study.

Additionally, as indicated in Table 6, articles seldom report the research designs used, which highlights a pressing lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process in publication, when they could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only certain aspects of the research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and designs is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to only the reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires. Researchers would apply a questionnaire in various ways, for example in questionnaire interviews, online surveys, and written questionnaires across most research methods. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field, named “psychological practice.” This result may show the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003), of psychological practice, and of knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology has in the field of psychology (Chryssochoou, 2015). The APA's perspective on 2018 trends in psychology also identifies an increased focus in psychology on how social determinants influence people's health (Deangelis, 2017).

This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study; despite the journals' general aims (as stated on their websites), this selection introduced a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the methods stated adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review. However, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounting for the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to allow researchers insight into the general use of methods, not only the most frequently used ones. The lack of a rigorous account of research methods in articles was represented in depth for each step in the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. Recommendations for future research aimed to motivate research into the practical implications of these results for psychology, for example, publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • Aanstoos C. M. (2014). Psychology. Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid$=$1&hid$=$4113&bdata$=$JnNpdGU9ZWRzL~WxpdmU%3d#AN$=$93871882&db$=$ers
  • American Psychological Association (2020). Science of Psychology. Available online at: https://www.apa.org/action/science/
  • Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. 10.1037/amp0000191
  • Bandara W., Furtmueller E., Gorbacheva E., Miskon S., Beekhuyzen J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. 10.17705/1CAIS.03708
  • Barr-Walker J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. 10.5195/JMLA.2017.109
  • Bittermann A., Fischer A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. 10.1027/2151-2604/a000318
  • Bluhm D. J., Harman W., Lee T. W., Mitchell T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. 10.1111/j.1467-6486.2010.00972.x
  • Breen L. J., Darlaston-Jones D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. 10.1080/00050060903127481
  • Burman E., Whelan P. (2011). Problems in/of Qualitative Research. Maidenhead: Open University Press/McGraw Hill.
  • Chaichanasakul A., He Y., Chen H., Allen G. E. K., Khairallah T. S., Ramos K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. 10.1177/0894845310380223
  • Chryssochoou X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. 10.1016/B978-0-08-097086-8.24095-6
  • Cichocka A., Jost J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. 10.1002/ijop.12011
  • Clay R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report. Available online at: https://www.apa.org/monitor/2017/11/trends-popular
  • Coetzee M., Van Zyl L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol. 40, 1–16. 10.4102/sajip.v40i1.1227
  • Counsell A., Harlow L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. 10.1037/cap0000074
  • Deangelis T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report. Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
  • Demuth C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. 10.1007/s12124-015-9303-9
  • Denzin N. K., Lincoln Y. (2003). The Landscape of Qualitative Research: Theories and Issues, 2nd Edn. London: Sage.
  • Drotar D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. 10.1093/jpepsy/jsq049
  • Dweck C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. 10.1177/1745691616687747
  • Earp B. D., Trafimow D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. 10.3389/fpsyg.2015.00621
  • Ezeh A. C., Izugbara C. O., Kabiru C. W., Fonn S., Kahn K., Manderson L., et al.. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model . Glob. Health Action 3 :5693. 10.3402/gha.v3i0.5693 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ferreira A. L. L., Bessa M. M. M., Drezett J., De Abreu L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review . Reprod. Clim. 31 , 48–54. 10.1016/j.recli.2015.12.002 [ CrossRef ] [ Google Scholar ]
  • Fonseca M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
  • Gough B., Lyons A. (2016). The future of qualitative research in psychology: accentuating the positive . Integr. Psychol. Behav. Sci. 50 , 234–243. 10.1007/s12124-015-9320-8 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grant M. J., Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info. Libr. J. 26 , 91–108. 10.1111/j.1471-1842.2009.00848.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grix J. (2002). Introducing students to the generic terminology of social research . Politics 22 , 175–186. 10.1111/1467-9256.00173 [ CrossRef ] [ Google Scholar ]
  • Gunasekare U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review . Int. J. Sci. Res. 4 , 361–368. Available online at: https://ssrn.com/abstract=2735996 [ Google Scholar ]
  • Hengartner M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9 :256. 10.3389/fpsyg.2018.00256 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Holloway W. (2008). Doing intellectual disagreement differently . Psychoanal. Cult. Soc. 13 , 385–396. 10.1057/pcs.2008.29 [ CrossRef ] [ Google Scholar ]
  • Ivankova N. V., Creswell J. W., Plano Clark V. L. (2016). Foundations and Approaches to mixed methods research , in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers; ), 306–335. [ Google Scholar ]
  • Johnson M., Long T., White A. (2001). Arguments for British pluralism in qualitative health research . J. Adv. Nurs. 33 , 243–249. 10.1046/j.1365-2648.2001.01659.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Johnston A., Kelly S. E., Hsieh S. C., Skidmore B., Wells G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide . J. Clin. Epidemiol. 108 , 64–72. 10.1016/j.jclinepi.2018.11.030 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ketchen D. J., Jr., Boyd B. K., Bergh D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges . Organ. Res. Methods 11 , 643–658. 10.1177/1094428108319843 [ CrossRef ] [ Google Scholar ]
  • Ktepi B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
  • Laher S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research . S. Afr. J. Psychol. 46 , 316–327. 10.1177/0081246316649121 [ CrossRef ] [ Google Scholar ]
  • Lee C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/
  • Lee T. W., Mitchell T. R., Sablynski C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999 . J. Vocat. Behav. 55 , 161–187. 10.1006/jvbe.1999.1707 [ CrossRef ] [ Google Scholar ]
  • Leech N. L., Anthony J., Onwuegbuzie A. J. (2007). A typology of mixed methods research designs . Sci. Bus. Media B. V Qual. Quant 43 , 265–275. 10.1007/s11135-007-9105-3 [ CrossRef ] [ Google Scholar ]
  • Levitt H. M., Motulsky S. L., Wertz F. J., Morrow S. L., Ponterotto J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity . Qual. Psychol. 4 , 2–22. 10.1037/qup0000082 [ CrossRef ] [ Google Scholar ]
  • Lowe S. M., Moore S. (2014). Social networks and female reproductive choices in the developing world: a systematized review . Rep. Health 11 :85. 10.1186/1742-4755-11-85 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Maree K. (2016). Planning a research proposal , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 49–70. [ Google Scholar ]
  • Maree K., Pietersen J. (2016). Sampling , in First Steps in Research, 2nd Edn , ed Maree K. (Pretoria: Van Schaik Publishers; ), 191–202. [ Google Scholar ]
  • Ngulube P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa . ESARBICA J. 32 , 10–23. Available online at: http://hdl.handle.net/10500/22397 . [ Google Scholar ]
  • Nieuwenhuis J. (2016). Qualitative research designs and data-gathering techniques , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 71–102. [ Google Scholar ]
  • Nind M., Kilburn D., Wiles R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods . Int. J. Soc. Res. Methodol. 18 , 561–576. 10.1080/13645579.2015.1062628 [ CrossRef ] [ Google Scholar ]
  • O'Cathain A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution . J. Mix. Methods 3 , 1–6. 10.1177/1558689808326272 [ CrossRef ] [ Google Scholar ]
  • O'Neil S., Koekemoer E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review . SA J. Indust. Psychol. 42 , 1–16. 10.4102/sajip.v42i1.1350 [ CrossRef ] [ Google Scholar ]
  • Onwuegbuzie A. J., Collins K. M. (2017). The role of sampling in mixed methods research enhancing inference quality . Köln Z Soziol. 2 , 133–156. 10.1007/s11577-017-0455-0 [ CrossRef ] [ Google Scholar ]
  • Perestelo-Pérez L. (2013). Standards on how to develop and report systematic reviews in psychology and health . Int. J. Clin. Health Psychol. 13 , 49–57. 10.1016/S1697-2600(13)70007-3 [ CrossRef ] [ Google Scholar ]
  • Pericall L. M. T., Taylor E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review . Dev. Med. Child Neurol. 56 , 19–30. 10.1111/dmcn.12237 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peterson R. A., Merunka D. R. (2014). Convenience samples of college students and research reproducibility . J. Bus. Res. 67 , 1035–1041. 10.1016/j.jbusres.2013.08.010 [ CrossRef ] [ Google Scholar ]
  • Ritchie J., Lewis J., Elam G. (2009). Designing and selecting samples , in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed Ritchie J., Lewis J. (London: Sage; ), 1–23. [ Google Scholar ]
  • Sandelowski M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis . Res. Nurs. Health 34 , 342–352. 10.1002/nur.20437 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sandelowski M., Voils C. I., Knafl G. (2009). On quantitizing . J. Mix. Methods Res. 3 , 208–222. 10.1177/1558689809334210 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scholtz S. E., De Klerk W., De Beer L. T. (2019). A data generated research framework for conducting research methods in psychological research .
  • Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
  • Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scott Jones J., Goldring J. E. (2015). ‘I' m not a quants person'; key strategies in building competence and confidence in staff who teach quantitative research methods . Int. J. Soc. Res. Methodol. 18 , 479–494. 10.1080/13645579.2015.1062623 [ CrossRef ] [ Google Scholar ]
  • Smith B., McGannon K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology . Int. Rev. Sport Exerc. Psychol. 11 , 101–121. 10.1080/1750984X.2017.1317357 [ CrossRef ] [ Google Scholar ]
  • Stangor C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
  • Strydom H. (2011). Sampling in the quantitative paradigm , in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds de Vos A. S., Strydom H., Fouché C. B., Delport C. S. L. (Pretoria: Van Schaik Publishers; ), 221–234. [ Google Scholar ]
  • Tashakkori A., Teddlie C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications. [ Google Scholar ]
  • Toomela A. (2010). Quantitative methods in psychology: inevitable and useless . Front. Psychol. 1 :29. 10.3389/fpsyg.2010.00029 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Truscott D. M., Swars S., Smith S., Thornton-Reid F., Zhao Y., Dooley C., et al.. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005 . Int. J. Soc. Res. Methodol. 13 , 317–328. 10.1080/13645570903097950 [ CrossRef ] [ Google Scholar ]
  • Weiten W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth. [ Google Scholar ]


What Is Experimental Psychology?


The science of psychology spans several fields. There are dozens of disciplines in psychology, including abnormal psychology, cognitive psychology and social psychology.

One way to view these fields is to separate them into two types: applied vs. experimental psychology. These groups describe virtually any type of work in psychology.

The following sections explore what experimental psychology is and some examples of what it covers.

Experimental psychology seeks to explore and better understand behavior through empirical research methods. This work allows findings to be employed in real-world applications (applied psychology) across fields such as clinical psychology, educational psychology, forensic psychology, sports psychology, and social psychology. Experimental psychology is able to shed light on people's personalities and life experiences by examining the way people behave and how behavior is shaped throughout life, along with other theoretical questions. The field looks at a wide range of behavioral topics including sensation, perception, attention, memory, cognition, and emotion, according to the American Psychological Association (APA).

Research is the focus of experimental psychology. Using scientific methods to collect data and perform research, experimental psychology focuses on certain questions, and, one study at a time, reveals information that contributes to larger findings or a conclusion. Due to the breadth and depth of certain areas of study, researchers can spend their entire careers looking at a complex research question.

Experimental Psychology in Action

The APA writes about one experimental psychologist, Robert McCann, who is now retired after 19 years working at NASA. During his time at NASA, his work focused on the user experience — on land and in space — where he applied his expertise to cockpit system displays, navigation systems, and safety displays used by astronauts in NASA spacecraft. McCann's knowledge of human information processing allowed him to help NASA design shuttle displays that can increase the safety of shuttle missions. He looked at human limitations of attention and display processing to gauge what people can reliably see and correctly interpret on an instrument panel. McCann played a key role in helping to determine the features of cockpit displays without overloading the pilot or taxing their attention span.

“One of the purposes of the display was to alert the astronauts to the presence of a failure that interrupted power in a specific region,” McCann said, “The most obvious way to depict this interruption was to simply remove (or dim) the white line(s) connecting the affected components. Basic research on visual attention has shown that humans do not notice the removal of a display feature very easily when the display is highly cluttered. We are much better at noticing a feature or object that is suddenly added to a display.” McCann drew on his expertise in experimental psychology to research and deliver this important development for NASA.

Valve Corporation

Another experimental psychologist, Mike Ambinder, uses his expertise to help design video games. He is a senior experimental psychologist at Valve Corporation, the video game developer behind the software distribution platform Steam. Ambinder told Orlando Weekly that his career working on gaming hits such as Portal 2 and Left 4 Dead “epitomizes the intersection between scientific innovation and electronic entertainment.” His career started when he gave a presentation to Valve on applying psychology to game design while he was finishing his PhD in experimental design. “I’m very lucky to have landed at a company where freedom and autonomy and analytical decision-making are prized,” he said. “I realized how fortunate I was to work for a company that would encourage someone with a background in psychology to see what they could contribute in a field where they had no prior experience.”

Ambinder spends his time on data analysis, hardware research, play-testing methodologies, and on any aspect of games where knowledge of human behavior could be useful. Ambinder described Valve’s process for refining a product as straightforward. “We come up with a game design (our hypothesis), and we place it in front of people external to the company (our play-test or experiment). We gather their feedback, and then iterate and improve the design (refining the theory). It’s essentially the scientific method applied to game design, and the end result is the consequence of many hours of applying this process.” To gather play-test data, Ambinder is engaged in the newer field of biofeedback technology, which can quantify gamers’ enjoyment. His research looks at unobtrusive measurements of facial expressions that can achieve such goals. Ambinder is also examining eye-tracking as a next-generation input method.

Pursue Your Career Goals in Psychology

Develop a greater understanding of psychology concepts and applications with Concordia St. Paul’s  online bachelor’s in psychology . Enjoy small class sizes with a personal learning environment geared toward your success, and learn from knowledgeable faculty who have industry experience. 


Experimental Psychology: 10 Examples & Definition


Experimental psychology refers to studying psychological phenomena using scientific methods. Originally, the primary scientific method involved manipulating one variable and observing systematic changes in another variable.

Today, psychologists utilize several types of scientific methodologies.

Experimental psychology examines a wide range of psychological phenomena, including memory, sensation and perception, cognitive processes, motivation, emotion, and developmental processes, as well as the neurophysiological concomitants of each of these subjects.

Studies are conducted on both animal and human participants, and must comply with stringent requirements and controls regarding the ethical treatment of both.

Definition of Experimental Psychology

Experimental psychology is a branch of psychology that utilizes scientific methods to investigate the mind and behavior.

It involves the systematic and controlled study of human and animal behavior through observation and experimentation.

Experimental psychologists design and conduct experiments to understand cognitive processes, perception, learning, memory, emotion, and many other aspects of psychology. They often manipulate variables (independent variables) to see how this affects behavior or mental processes (dependent variables).

The findings from experimental psychology research are often used to better understand human behavior and can be applied in a range of contexts, such as education, health, business, and more.

Experimental Psychology Examples

1. The Puzzle Box Studies (Thorndike, 1898) Placing different cats in a box that can only be escaped by pulling a cord, and then taking detailed notes on how long it took for them to escape allowed Edward Thorndike to derive the Law of Effect: actions followed by positive consequences are more likely to occur again, and actions followed by negative consequences are less likely to occur again (Thorndike, 1898).

2. Reinforcement Schedules (Skinner, 1956) By placing rats in a Skinner Box and changing when and how often the rats are rewarded for pressing a lever, it is possible to identify how each schedule results in different behavior patterns (Skinner, 1956). This led to a wide range of theoretical ideas around how rewards and consequences can shape the behaviors of both animals and humans.

3. Observational Learning (Bandura, 1980) Some children watch a video of an adult punching and kicking a Bobo doll. Other children watch a video in which the adult plays nicely with the doll. By carefully observing the children’s behavior later when in a room with a Bobo doll, researchers can determine if television violence affects children’s behavior (Bandura, 1980).

4. The Fallibility of Memory (Loftus & Palmer, 1974) A group of participants watch the same video of two cars having an accident. Afterwards, some are asked to estimate the rate of speed the cars were going when they “smashed” into each other. Other participants are asked to estimate the rate of speed the cars were going when they “bumped” into each other. Changing the phrasing of the question changes the memory of the eyewitness.

5. Intrinsic Motivation in the Classroom (Dweck, 1990) To investigate the role of autonomy in intrinsic motivation, half of the students are told they are “free to choose” which tasks to complete. The other half of the students are told they “must choose” some of the tasks. Researchers then carefully observe how long the students engage in the tasks and later ask them some questions about whether they enjoyed doing the tasks.

6. Systematic Desensitization (Wolpe, 1958) A clinical psychologist carefully documents his treatment of a patient’s social phobia with progressive relaxation. At first, the patient is trained to monitor, tense, and relax various muscle groups while viewing photos of parties. Weeks later, they approach a stranger to ask for directions, initiate a conversation on a crowded bus, and attend a small social gathering. The therapist’s notes are transcribed into a scientific report and published in a peer-reviewed journal.

7. Study of Remembering (Bartlett, 1932) Bartlett’s work is a seminal study in the field of memory, where he used the concept of “schema” to describe an organized pattern of thought or behavior. He conducted a series of experiments using folk tales to show that memory recall is influenced by cultural schemas and personal experiences.

8. Study of Obedience (Milgram, 1963) This famous study explored the conflict between obedience to authority and personal conscience. Milgram found that a majority of participants were willing to administer what they believed were harmful electric shocks to a stranger when instructed by an authority figure, highlighting the power of authority and situational factors in driving behavior.

9. Pavlov’s Dog Study (Pavlov, 1927) Ivan Pavlov, a Russian physiologist, conducted a series of experiments that became a cornerstone in the field of experimental psychology. Pavlov noticed that dogs would salivate when they saw food. He then began to ring a bell each time he presented the food to the dogs. After a while, the dogs began to salivate merely at the sound of the bell. This experiment demonstrated the principle of “classical conditioning.”

10. Piaget’s Stages of Development (Piaget, 1958) Jean Piaget proposed a theory of cognitive development in children that consists of four distinct stages: the sensorimotor stage (birth to 2 years), where children learn about the world through their senses and motor activities, through to the formal operational stage (12 years and beyond), where abstract reasoning and hypothetical thinking develop. Piaget’s theory is an example of experimental psychology as it was developed through systematic observation and experimentation on children’s problem-solving behaviors.

Types of Research Methodologies in Experimental Psychology 

Researchers have utilized several different types of research methodologies since the early days of Wundt (1832-1920).

1. The Experiment

The experiment involves the researcher manipulating the level of one variable, called the Independent Variable (IV), and then observing changes in another variable, called the Dependent Variable (DV).

The researcher is interested in determining if the IV causes changes in the DV. For example, does television violence make children more aggressive?

So, some children in the study, called research participants, will watch a show with TV violence, called the treatment group. Others will watch a show with no TV violence, called the control group.

So, there are two levels of the IV: violence and no violence. Next, children will be observed to see if they act more aggressively. This is the DV.

If TV violence makes children more aggressive, then the children who watched the violent show will be more aggressive than the children who watched the non-violent show.

A key requirement of the experiment is random assignment. Each research participant is assigned to one of the two groups in a way that makes it a completely random process. This means that each group will have a mix of children: different personality types, diverse family backgrounds, and a range of intelligence levels.
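As a minimal sketch of the idea (with hypothetical participant labels, not data from any study above), random assignment can be implemented by shuffling the sample and dealing participants into groups:

```python
import random

def randomly_assign(participants, groups=("treatment", "control")):
    """Shuffle the sample, then deal participants into groups round-robin,
    so each participant has an equal chance of landing in each group."""
    pool = list(participants)
    random.shuffle(pool)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(pool):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

# 20 hypothetical children split evenly between two conditions
groups = randomly_assign([f"child_{n}" for n in range(20)])
print(len(groups["treatment"]), len(groups["control"]))  # 10 10
```

Because the shuffle is random, individual differences such as personality or family background are spread across both groups rather than systematically favoring one.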

2. The Longitudinal Study

A longitudinal study involves selecting a sample of participants and then following them for years, or decades, periodically collecting data on the variables of interest.

For example, a researcher might be interested in determining if parenting style affects academic performance of children. Parenting style is called the predictor variable , and academic performance is called the outcome variable .

Researchers will begin by randomly selecting a group of children to be in the study. Then, they will identify the type of parenting practices used when the children are 4 and 5 years old.

A few years later, perhaps when the children are 8 and 9, the researchers will collect data on their grades. This process can be repeated over the next 10 years, including through college.

If parenting style has an effect on academic performance, then the researchers will see a connection between the predictor variable and outcome variable.

Children raised with parenting style X will have higher grades than children raised with parenting style Y.
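That logic can be illustrated with a toy calculation (entirely hypothetical numbers): code the predictor variable, record the outcome variable, and check how strongly the two move together with a Pearson correlation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: predictor coded 1 = parenting style X, 0 = style Y;
# outcome is each child's grade average years later.
parenting_style = [1, 1, 1, 1, 0, 0, 0, 0]
grades          = [88, 92, 85, 90, 74, 70, 78, 72]

r = pearson_r(parenting_style, grades)
print(round(r, 2))
```

A correlation near zero would suggest no connection between the predictor and the outcome; a strong positive value, as in this made-up sample, is the kind of pattern that would support the hypothesis.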

3. The Case Study

The case study is an in-depth study of one individual. This is a research methodology often used early in the examination of a psychological phenomenon or therapeutic treatment.

For example, in the early days of treating phobias, a clinical psychologist may try teaching one of their patients how to relax every time they see the object that creates so much fear and anxiety, such as a large spider.

The therapist would take very detailed notes on how the teaching process was implemented and the reactions of the patient. When the treatment had been completed, those notes would be written in a scientific form and submitted for publication in a scientific journal for other therapists to learn from.

There are several other methodologies available, each varying different aspects of the three described above. The researcher will select the methodology that is most appropriate to the phenomenon they want to examine.

They also must take into account various practical considerations such as how much time and resources are needed to complete the study. Conducting research always costs money.

People and equipment are needed to carry out every study, so researchers often try to obtain funding from their university or a government agency.

Origins and Key Developments in Experimental Psychology


Wilhelm Maximilian Wundt (1832-1920) is considered one of the fathers of modern psychology. He was a physiologist and philosopher and helped establish psychology as a distinct discipline (Khaleefa, 1999).  

In 1879 he established the world’s first psychology research lab at the University of Leipzig. This is considered a key milestone in establishing psychology as a scientific discipline. In addition to being the first person to use the term “psychologist” to describe himself, he also founded the discipline’s first scientific journal, Philosophische Studien, in 1883.

Another notable figure in the development of experimental psychology is Ernst Weber. Trained as a physician, Weber studied sensation and perception and created the first quantitative law in psychology.

His equation expresses how judgments of sensory differences are relative to previous levels of sensation; the smallest detectable change is referred to as the just-noticeable difference (jnd). This is known today as Weber’s Law (Hergenhahn, 2009).
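Weber’s Law is conventionally written (using notation standard in psychophysics, not taken from the text above) as:

```latex
\frac{\Delta I}{I} = k
```

where I is the baseline stimulus intensity, ΔI is the just-noticeable difference, and k is the Weber fraction for that sensory modality. For instance, a commonly cited Weber fraction for lifted weights is about 0.02, so a 100 g weight must change by roughly 2 g before the difference is reliably noticed.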

Gustav Fechner, one of Weber’s students, published the first book on experimental psychology in 1860, titled Elemente der Psychophysik. His work centered on the measurement of psychophysical facets of sensation and perception, with many of his methods still in use today.

The first American textbook on experimental psychology was Elements of Physiological Psychology, published in 1887 by George Trumbull Ladd.

Ladd also established a psychology lab at Yale University, while G. Stanley Hall and Charles Sanders Peirce continued Wundt’s work at a lab at Johns Hopkins University.

In the late 1800s, Charles Sanders Peirce’s contribution to experimental psychology is especially noteworthy because he invented the concept of random assignment (Stigler, 1992; Dehue, 1997).


This procedure ensures that each participant has an equal chance of being placed in any of the experimental groups (e.g., treatment or control group). This eliminates the influence of confounding factors related to inherent characteristics of the participants.

Random assignment is a fundamental criterion for a study to be considered a valid experiment.

From there, experimental psychology flourished in the 20th century as a science and transformed into an approach utilized in cognitive psychology, developmental psychology, and social psychology.

Today, the term experimental psychology refers to the study of a wide range of phenomena and involves methodologies not limited to the manipulation of variables.

The Scientific Process and Experimental Psychology

The one thing that makes psychology a science and distinguishes it from its roots in philosophy is the reliance upon the scientific process to answer questions. Making psychology a science was the main goal of its earliest founders, such as Wilhelm Wundt.

There are numerous steps in the scientific process, outlined in the graphic below.


1. Observation

First, the scientist observes an interesting phenomenon that sparks a question. For example, are the memories of eyewitnesses really reliable, or are they subject to bias or unintentional manipulation?

2. Hypothesize

Next, this question is converted into a testable hypothesis. For instance: the words used to question a witness can influence what they think they remember.

3. Devise a Study

Then the researcher(s) select a methodology that will allow them to test that hypothesis. In this case, the researchers choose the experiment, which will involve randomly assigning some participants to different conditions.

In one condition, participants are asked a question that implies a certain memory (treatment group), while other participants are asked a question which is phrased neutrally and does not imply a certain memory (control group).

The researchers then write a proposal that describes in detail the procedures they want to use, how participants will be selected, and the safeguards they will employ to ensure the rights of the participants.

That proposal is submitted to an Institutional Review Board (IRB). The IRB comprises a panel of researchers, community representatives, and other professionals who are responsible for reviewing all studies involving human participants.

4. Conduct the Study

If the IRB accepts the proposal, then the researchers may begin collecting data. After the data has been collected, it is analyzed using a software program such as SPSS.

Those analyses will either support or reject the hypothesis. That is, either the participants’ memories were affected by the wording of the question, or not.
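As an illustration only, with hypothetical speed estimates and SciPy standing in for a package like SPSS, the comparison between the two conditions might look like this:

```python
# Compare mean speed estimates from a leading-wording ("smashed") group
# against a neutral-wording control group with an independent-samples t-test.
# The numbers below are invented for illustration, not real study data.
from scipy import stats

treatment = [41, 44, 39, 46, 43, 45, 40, 42]  # mph estimates, leading wording
control   = [33, 35, 31, 36, 32, 34, 30, 35]  # mph estimates, neutral wording

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A p-value below the conventional .05 threshold would support the hypothesis that question wording influenced the memory reports; a large p-value would fail to support it.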

5. Publish the study

Finally, the researchers write a paper detailing their procedures and results of the statistical analyses. That paper is then submitted to a scientific journal.

The lead editor of that journal will then send copies of the paper to 3-5 experts in that subject. Each of those experts will read the paper and basically try to find as many things wrong with it as possible. Because they are experts, they are very good at this task.

After reading those critiques, the editor will most likely either send the paper back to the researchers, requiring them to respond to the criticisms and perhaps collect more data, or reject the paper outright.

In some cases, the study is so well done that the criticisms are minimal and the editor accepts the paper. It is then published in the scientific journal several months later.

That entire process can easily take two years, and usually more. But the findings of such a study have been through a very rigorous process, which means we can have substantial confidence that its conclusions are valid.

Experimental psychology refers to using a scientific process to investigate psychological phenomena.

There are a variety of methods employed today. They are used to study a wide range of subjects, including memory, cognitive processes, emotions and the neurophysiological basis of each.

The history of psychology as a science began in the 1800s primarily in Germany. As interest grew, the field expanded to the United States where several influential research labs were established.

As more methodologies were developed, the field of psychology as a science evolved into a prolific scientific discipline that has provided invaluable insights into human behavior.

Bartlett, F. C. (1995). Remembering: A study in experimental and social psychology. Cambridge University Press.

Dehue, T. (1997). Deception, efficiency, and random groups: Psychology and the gradual origination of the random group design. Isis , 88 (4), 653-673.

Ebbinghaus, H. (2013). Memory: A contribution to experimental psychology. Annals of Neurosciences, 20(4), 155.

Hergenhahn, B. R. (2009). An introduction to the history of psychology. Belmont, CA: Wadsworth Cengage Learning.

Khaleefa, O. (1999). Who is the founder of psychophysics and experimental psychology? American Journal of Islam and Society , 16 (2), 1-26.

Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585-589.

Pavlov, I.P. (1927). Conditioned reflexes . Dover, New York.

Piaget, J. (1959).  The language and thought of the child  (Vol. 5). Psychology Press.

Piaget, J., Fraisse, P., & Reuchlin, M. (2014). Experimental psychology: Its scope and method. Volume I (Psychology Revivals): History and method. Psychology Press.

Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11, 221-233.

Stigler, S. M. (1992). A historical view of statistical concepts in psychology and educational research. American Journal of Education , 101 (1), 60-70.

Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement 2 .

Wolpe, J. (1958). Psychotherapy by reciprocal inhibition. Stanford, CA: Stanford University Press.

Appendix: Images reproduced as Text

Definition: Experimental psychology is a branch of psychology that focuses on conducting systematic and controlled experiments to study human behavior and cognition.

Overview: Experimental psychology aims to gather empirical evidence and explore cause-and-effect relationships between variables. Experimental psychologists utilize various research methods, including laboratory experiments, surveys, and observations, to investigate topics such as perception, memory, learning, motivation, and social behavior.

Example: Pavlov’s dog experiments used scientific methods to develop a theory of how learning and association occur in animals. The same concepts were subsequently applied to the study of humans, where they informed psychological theories of learning. Pavlov’s reliance on empirical evidence was foundational to the work’s success.

Experimental Psychology Milestones:

1890: William James publishes “The Principles of Psychology”, a foundational text in the field of psychology.

1896: Lightner Witmer opens the first psychological clinic at the University of Pennsylvania, marking the beginning of clinical psychology.

1913: John B. Watson publishes “Psychology as the Behaviorist Views It”, marking the beginning of Behaviorism.

1920: Hermann Rorschach introduces the Rorschach inkblot test.

1938: B.F. Skinner introduces the concept of operant conditioning.

1967: Ulric Neisser publishes “Cognitive Psychology”, marking the beginning of the cognitive revolution.

1980: The third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) is published, introducing a new classification system for mental disorders.

The Scientific Process

  • Observe an interesting phenomenon
  • Formulate testable hypothesis
  • Select methodology and design study
  • Submit research proposal to IRB
  • Collect and analyze data; write paper
  • Submit paper for critical reviews


Dave Cornell (PhD)

Dr. Cornell has worked in education for more than 20 years. His work has involved designing teacher certification for Trinity College in London and in-service training for state governments in the United States. He has trained kindergarten teachers in 8 countries and helped businessmen and women open baby centers and kindergartens in 3 countries.



Chris Drew (PhD)

This article was peer-reviewed and edited by Chris Drew (PhD). The review process on Helpful Professor involves having a PhD level expert fact check, edit, and contribute to articles. Reviewers ensure all content reflects expert academic consensus and is backed up with reference to academic studies. Dr. Drew has published over 20 academic articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education and holds a PhD in Education from ACU.



In the late 1960s, social psychologists John Darley and Bibb Latané proposed a counter-intuitive hypothesis: the more witnesses there are to an accident or a crime, the less likely any of them is to help the victim (Darley & Latané, 1968) [1] .

They also theorized that this phenomenon occurs because each witness feels less responsible for helping—a process referred to as the “diffusion of responsibility.” Darley and Latané noted that their ideas were consistent with many real-world cases. For example, a New York woman named Catherine “Kitty” Genovese was assaulted and murdered while several witnesses evidently failed to help. But Darley and Latané also understood that such isolated cases did not provide convincing evidence for their hypothesized “bystander effect.” There was no way to know, for example, whether any of the witnesses to Kitty Genovese’s murder would have helped had there been fewer of them.

So to test their hypothesis, Darley and Latané created a simulated emergency situation in a laboratory. Each of their university student participants was isolated in a small room and told that he or she would be having a discussion about university life with other students via an intercom system. Early in the discussion, however, one of the students began having what seemed to be an epileptic seizure. Over the intercom came the following: “I could really-er-use some help so if somebody would-er-give me a little h-help-uh-er-er-er-er-er c-could somebody-er-er-help-er-uh-uh-uh (choking sounds)…I’m gonna die-er-er-I’m…gonna die-er-help-er-er-seizure-er- [chokes, then quiet]” (Darley & Latané, 1968, p. 379) [2] .

In actuality, there were no other students. These comments had been prerecorded and were played back to create the appearance of a real emergency. The key to the study was that some participants were told that the discussion involved only one other student (the victim), others were told that it involved two other students, and still others were told that it included five other students. Because this was the only difference between these three groups of participants, any difference in their tendency to help the victim would have to have been caused by it. And sure enough, the likelihood that the participant left the room to seek help for the “victim” decreased from 85% to 62% to 31% as the number of “witnesses” increased.

The Parable of the 38 Witnesses

The story of Kitty Genovese has been told and retold in numerous psychology textbooks. The standard version is that there were 38 witnesses to the crime, that all of them watched (or listened) for an extended period of time, and that none of them did anything to help. However, recent scholarship suggests that the standard story is inaccurate in many ways (Manning, Levine, & Collins, 2007) [3] . For example, only six eyewitnesses testified at the trial, none of them was aware that he or she was witnessing a lethal assault, and there have been several reports of witnesses calling the police or even coming to the aid of Kitty Genovese. Although the standard story inspired a long line of research on the bystander effect and the diffusion of responsibility, it may also have directed researchers’ and students’ attention away from other equally interesting and important issues in the psychology of helping—including the conditions in which people do in fact respond collectively to emergency situations.

The research that Darley and Latané conducted was a particular kind of study called an experiment. Experiments are used to determine not only whether there is a meaningful relationship between two variables but also whether the relationship is a causal one that is supported by statistical analysis. For this reason, experiments are one of the most common and useful tools in the psychological researcher’s toolbox. In this chapter, we look at experiments in detail. We will first consider what sets experiments apart from other kinds of studies and why they support causal conclusions while other kinds of studies do not. We then look at two basic ways of designing an experiment—between-subjects designs and within-subjects designs—and discuss their pros and cons. Finally, we consider several important practical issues that arise when conducting experiments.

  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4 , 377–383. ↵
  • Manning, R., Levine, M., & Collins, A. (2007). The Kitty Genovese murder and the social psychology of helping: The parable of the 38 witnesses. American Psychologist, 62 , 555–562. ↵


Using Science to Inform Educational Practices

Experimental Research

As you’ve learned, the only way to establish that there is a cause-and-effect relationship between two variables is to conduct a scientific experiment. Experiment has a different meaning in the scientific context than in everyday life. In everyday conversation, we often use it to describe trying something for the first time, such as experimenting with a new hairstyle or new food. However, in the scientific context, an experiment has precise requirements for design and implementation.

Video 2.8.1. Experimental Research Design provides explanation and examples for experimental research design. A closed-captioned version of this video is available here.

The Experimental Hypothesis

In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have younger relatives who watch cartoons featuring characters using martial arts to save the world from evildoers, with an impressive array of punching, kicking, and defensive postures. You notice that after watching these programs for a while, your young relatives mimic the fighting behavior of the characters portrayed in the cartoon. Seeing behavior like this right after a child watches violent television programming might lead you to hypothesize that viewing violent television programming leads to an increase in the display of violent behaviors. These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to test our hypothesis rigorously. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.

Designing an Experiment

The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference—the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since the experimental manipulation is the only difference between the two groups, we can be confident that any differences between them are due to the experimental manipulation rather than to preexisting differences.

In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. We measure the violent behavior in our control group after they watch nonviolent television programming for the same amount of time. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation. Therefore, we have the control group watch non-violent television programming for the same amount of time as the experimental group.

We also need to define precisely, or operationalize, what is considered violent and nonviolent. An  operational definition  is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we also may choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids peoples’ ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.
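The idea of an operational definition can be made concrete in code. In this hypothetical Python sketch, "violent behavior" is operationalized as a fixed set of physical acts, so any observer (or program) counting acts applies exactly the same rule; the category names and log format are invented purely for illustration.

```python
# A minimal sketch of an operational definition, assuming we code only
# physical acts (kicking, punching, shoving) as "violent behavior".
# The act labels and the observation-log format are hypothetical.

VIOLENT_ACTS = {"kick", "punch", "shove"}  # our operational definition

def count_violent_acts(observation_log):
    """Count observed acts that meet the operational definition."""
    return sum(1 for act in observation_log if act in VIOLENT_ACTS)

log = ["run", "punch", "laugh", "kick", "shout", "shove"]
print(count_violent_acts(log))  # 3 acts meet the definition
```

Because the definition is explicit, anyone repeating the study can apply it and obtain comparable counts.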

Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts are recorded.

Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias.  Experimenter bias  refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By being blind to which child is in which group, we protect against those biases. This situation is a  single-blind study , meaning that the participants are unaware as to which group they are in (experiment or control group) while the researcher knows which participants are in each group.

In a  double-blind study , both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase  placebo effect , you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.


Why is that? Imagine that you are a participant in a study testing a new mood-improving drug, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.

To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case, a sugar pill). Now everyone gets a pill, and once again, neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations.

Video 2.8.2.  Introduction to Experimental Design introduces fundamental elements for experimental research design.

Independent and Dependent Variables

In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study (Figure 2.8.1). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.


Figure  2.8.1.  In an experiment, manipulations of the independent variable are expected to result in changes in the dependent variable.

We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable  depends  on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what effect does watching a half-hour of violent television programming or nonviolent television programming have on the number of incidents of physical aggression displayed on the playground?

Selecting and Assigning Experimental Participants

Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants, so we need to determine who to include.  Participants  are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.

Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (Figure 2.8.2). If possible, we should use a random sample (there are other types of samples, but for the purposes of this chapter, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because, if the sample is large enough, we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.

In our example, let’s say we decide our population of interest is fourth graders. But all fourth graders make up a very large population, so we need to be more specific; instead, we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders to participate in our experiment.

In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.
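The sampling step described above can be sketched in a few lines of Python. The roster size and student names here are hypothetical; the key point is that `random.sample` draws without replacement and gives every member of the population an equal chance of selection.

```python
import random

# A sketch of drawing a simple random sample: every fourth grader on the
# city roster has an equal chance of selection. Names are hypothetical.
city_roster = [f"student_{i}" for i in range(5000)]  # all fourth graders

random.seed(42)  # fixed seed so this illustration is reproducible
sample = random.sample(city_roster, k=200)

print(len(sample))       # 200 participants
print(len(set(sample)))  # sampling is without replacement, so all 200 unique
```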


Figure  2.8.2.  Researchers may work with (a) a large population or (b) a sample group that is a subset of the larger population.

Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With  random assignment , all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the fourth graders in the sample to either the experimental or the control group.

Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be improbable that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.
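Random assignment itself is simple to sketch: shuffle the sample, then split it in half, so each child has an equal chance of ending up in either group. The participant names are hypothetical, and a fixed seed is used only to make the illustration reproducible.

```python
import random

# A sketch of random assignment: shuffle the sampled participants, then
# split the shuffled list in half.
random.seed(0)
participants = [f"child_{i}" for i in range(200)]  # hypothetical sample

random.shuffle(participants)
experimental_group = participants[:100]  # will watch the violent program
control_group = participants[100:]       # will watch the nonviolent program

print(len(experimental_group), len(control_group))  # 100 100
```

With large enough groups, the shuffle makes systematic differences between the groups unlikely, which is exactly the property the text describes.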

Exercise 2.2 Randomization in Sampling and Assignment

Use this online tool to generate randomized numbers instantly and to learn more about random sampling and assignments.

Issues to Consider

While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.

Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.

Interpreting Experimental Findings

Once data are collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. The statistical analysis determines how likely it is that any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. Stated another way, if there were truly no difference between the groups, we would expect a difference this large to turn up by chance in fewer than 5 out of 100 repetitions of the experiment.
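One way to make the logic of a significance test concrete is a permutation test, sketched below with made-up scores (counts of violent acts per child). The question it asks: if group labels were irrelevant, how often would randomly shuffled labels produce a difference at least as large as the one we observed? If the answer is "almost never," the difference is unlikely to be due to chance.

```python
import random

# A permutation-test sketch using only the standard library. The scores
# are invented counts of violent acts per child in each group.
experimental = [6, 7, 5, 8, 9, 7, 6, 8]  # watched violent program
control = [3, 4, 2, 5, 3, 4, 2, 3]       # watched nonviolent program

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(experimental) - mean(control)
pooled = experimental + control

random.seed(1)
n_extreme = 0
n_iter = 10_000
for _ in range(n_iter):
    random.shuffle(pooled)  # pretend group labels are meaningless
    diff = mean(pooled[:8]) - mean(pooled[8:])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / n_iter
print(p_value < 0.05)  # a difference this large is very unlikely by chance
```

In practice researchers typically use a t-test or similar procedure (for example, in SPSS), but the permutation test shows the underlying idea of "how often would chance alone do this?" directly.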

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. Random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.

Candela Citations

  • Experimental Research. Authored by : Nicole Arduini-Van Hoose. Provided by : Hudson Valley Community College. Retrieved from : https://courses.lumenlearning.com/edpsy/chapter/experimental-research/. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike
  • Experimental Research. Authored by : Nicole Arduini-Van Hoose. Provided by : Hudson Valley Community College. Retrieved from : https://courses.lumenlearning.com/adolescent/chapter/experimental-research/. Project : https://courses.lumenlearning.com/adolescent/chapter/experimental-research/. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

Educational Psychology Copyright © 2020 by Nicole Arduini-Van Hoose is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


What is Experimental Psychology?

Bryn Farnsworth


The mind is a complicated place. Fortunately, the scientific method is perfectly equipped to deal with complexity. If we put these two things together we have the field of experimental psychology, broadly defined as the scientific study of the mind. The word “experimental” in this context means that tests are administered to participants, outcomes are measured, and comparisons are made.

More formally, this means that a group of participants are exposed to a stimulus (or stimuli), and their behavior in response is recorded. This behavior is compared to some kind of control condition, which could be either a neutral stimulus, the absence of a stimulus, or against a control group (who maybe do nothing at all).

Experimental psychology is concerned with testing theories of human thoughts, feelings, actions, and beyond – any aspect of being human that involves the mind. This is a broad category that features many branches within it (e.g. behavioral psychology , cognitive psychology). Below, we will go through a brief history of experimental psychology, the aspects that characterize it, and outline research that has gone on to shape this field.

A Brief History of Experimental Psychology

As with anything, and perhaps particularly with scientific ideas, it’s difficult to pinpoint the exact moment in which a thought or approach was conceived. One of the best candidates to credit with the emergence of experimental psychology is Gustav Fechner, who came to prominence in the 1830s. After completing his PhD in biology at the University of Leipzig [1], and continuing his work as a professor, he made a significant breakthrough in the conception of mental states.

Scientists later wrote about Fechner’s breakthrough for understanding perception: “An increase in the intensity of a stimulus, Fechner argued, does not produce a one-to-one increase in the intensity of the sensation … For example, adding the sound of one bell to that of an already ringing bell produces a greater increase in sensation than adding one bell to 10 others already ringing. Therefore, the effects of stimulus intensities are not absolute but are relative to the amount of sensation that already exists.” [2]


This ultimately meant that mental perception is responsive to the material world – the mind doesn’t passively register a stimulus (if that were the case, there would be a linear relationship between the intensity of a stimulus and the actual perception of it), but responds to it dynamically. This conception shapes much of experimental psychology and its grounding theory: that the response of the brain to the environment can be quantified.
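Fechner's insight, that sensation grows with the logarithm of stimulus intensity rather than linearly with it, can be sketched in a few lines of Python. This is a minimal illustration of the Weber–Fechner law; the units, threshold, and scaling constant here are arbitrary choices for the sketch, not values from the original work.

```python
import math

def sensation(intensity, threshold=1.0, k=1.0):
    """Weber-Fechner law: perceived sensation grows with the
    logarithm of physical intensity, not linearly with it."""
    return k * math.log(intensity / threshold)

# Going from 1 ringing bell to 2 doubles the intensity...
one_to_two = sensation(2) - sensation(1)
# ...while going from 10 bells to 11 barely changes the sensation.
ten_to_eleven = sensation(11) - sensation(10)

print(one_to_two > ten_to_eleven)  # True
```

The same one-bell increment produces a much smaller change in sensation on top of ten bells than on top of one, matching the quoted description of Fechner's argument.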

Fechner went on to research within this area for many subsequent years, testing new ideas regarding human perception. Meanwhile, another German scientist, working in Heidelberg to the west, began his work on the problem of multitasking and created the next paradigm shift for experimental psychology. That scientist was Wilhelm Wundt, who had followed the work of Gustav Fechner.

Wilhelm Wundt is often credited with being “the father of experimental psychology” and is the founding point for many aspects of it. He began the first experimental psychology lab and the first scientific journal of the field, and ultimately formalized the approach as a science. Wundt set in stone what Fechner had put on paper.

The next scientist to advance the field of experimental psychology was influenced directly by reading Fechner’s book “Elements of Psychophysics”. Hermann Ebbinghaus, once again a German scientist, carried out the first properly formalized research into memory and forgetting, using long lists of (mostly) nonsense syllables (such as “VAW”, “TEL”, “BOC”) and recording how long it took for people to forget them.

Experiments using these lists, concerning learning and memory, would take up much of Ebbinghaus’ career and help cement experimental psychology as a science. Many other scientists’ contributions helped pave the way for the direction, approach, and success of experimental psychology (Hermann von Helmholtz, Ernst Weber, and Mary Whiton Calkins, to name just a few); all played a part in creating the field as we know it today. Their work defined the field, providing it with the characteristics that we’ll now go through below.


What Defines Experimental Psychology?

Defining any scientific field is in itself no exact science – there are inevitably aspects that will be missed. However, experimental psychology features at least three central components that define it: empiricism, falsifiability, and determinism. These features are central to experimental psychology, but also to many other scientific fields.


Empiricism refers to the collection of data that can support or refute a theory. In opposition to purely theoretical reasoning, empiricism is concerned with observations that can be tested. It is based on the idea that all knowledge stems from observations that can be perceived, and data surrounding them can be collected to form experiments.

Falsifiability is a foundational aspect of all contemporary scientific work. Karl Popper, a 20th-century philosopher, formalized this concept: for any theory to be scientific, there must be a way to falsify it. Otherwise, ludicrous but unprovable claims could be made with the same weight as the most rigorously tested theories.

The Theory of Relativity, for example, is scientific because it is possible that evidence could emerge to disprove it; this means it can be tested. An example of an unfalsifiable argument is that the earth is younger than it appears but was created to appear older than it is – any evidence against this is dismissed within the argument itself, rendering it impossible to falsify, and therefore untestable.

Determinism refers to the notion that every event has a preceding cause. Applied to mental states, this means that the brain responds to stimuli, and that these responses can ultimately be predicted, given the correct data.

These aspects of experimental psychology run throughout the research carried out within this field. Thousands of articles feature research carried out in this vein; below we will go through just a few of the most influential and well-cited studies that have shaped the field, then look to its future.

Classic Studies in Experimental Psychology

Little Albert

One of the most notorious studies within experimental psychology was also one of the foundational pieces of research for behaviorism. Popularly known as the study of “Little Albert”, this experiment, carried out in 1920, focused on whether a baby could be made to fear a stimulus through conditioning (conditioning refers to the association of a response to a stimulus) [3].

The psychologist John B. Watson devised an experiment in which a baby was exposed to a neutral stimulus (in this case, a white rat) at the same time as an unconditioned, fear-inducing stimulus (the loud, sudden sound of a hammer hitting a metal bar). The repeated pairing of this loud noise with the appearance of the white rat eventually made the white rat a conditioned stimulus – inducing the fear response even without the sound of the hammer.


While the study was clearly problematic, and wouldn’t (and shouldn’t!) clear any ethics board today, it was hugely influential for its time, showing how human emotional responses can be intentionally shaped through conditioning – a feat previously demonstrated only in animals [4].

Watson, described by a former professor of his as a person “who thought too highly of himself and was more interested in his own ideas than in people” [5], was revered and reviled in equal measure [2]. While his approach has since been rightly questioned, the study was a breakthrough for the conception of human behavior.

Asch’s Conformity Experiment

Three decades after Watson’s infamous experiment, beliefs were studied rather than behavior. Research carried out by Solomon Asch in 1951 showed how group pressure could make people say what they didn’t believe.

The goal was to examine the conditions that “induce individuals to resist or to yield to group pressures when the latter are perceived to be contrary to fact” [6]. Participants were introduced to a group of seven people in which, unbeknownst to them, all the other individuals were actors hired by Asch. The task was introduced as a perceptual test, in which the lengths of lines were to be compared.


Sets of lines were shown to the group – three on one card, one on another. The apparent task was to compare the three lines and say which was most like the single line in length. The answers were plainly obvious, and in one-on-one testing, participants gave the correct answer over 99% of the time. Yet in the group setting, in which each actor, one after the other, named an incorrect line out loud, the participants’ answers changed.

On average, around 38% of the answers the participants gave were incorrect – a huge jump from the less than 1% error rate reported in non-group settings. The study was hugely influential in showing how our actions can be shaped by the environment we are placed in, particularly by social factors.

The Invisible Gorilla

If you don’t know this research from the title already, then it’s best experienced by watching the video below, and counting the number of ball passes.

The research of course has little to do with throwing a ball around, but more to do with the likelihood of not seeing the person in a gorilla costume who appears in the middle of the screen for eight seconds. The research, carried out in 1999, investigated how our attentional resources can impact how we perceive the world [7]. The term “ inattentional blindness ” refers to the effective blindness of our perceptions when our attention is engaged in another task.

The study tested how attentional processing is distributed, suggesting that objects that are more relevant to the task are more likely to be seen than objects which simply have close spatial proximity (very roughly – something expected is more likely to be seen even if it’s further away, whereas something unexpected is less likely to be seen even if it’s close).

The research not only showed how attention shapes our perceptions, but also has real-world implications. A replication of this study was done using eye tracking to record the visual search of radiologists who were instructed to look for nodules on one of several X-rays of lungs [8]. As the researchers state: “A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla.”

The original study, and research that followed since, has been crucial for showing how our expectations about the environment can shape our perceptions. Modern research has built upon each of the ideas and studies that have been carried out across almost 200 years.


The Future of Experimental Psychology

The majority of this article has been concerned with what experimental psychology is, where it comes from, and what it has achieved so far. An inevitable follow-up question to this is – where is it going?

While predictions are difficult to make, there are at least indications. The best place to look is to experts in the field. Schultz and Schultz refer to modern psychology “as the science of behavior and mental processes instead of only behavior, a science seeking to explain overt behavior and its relationship to mental processes.” [2].

The Association for Psychological Science (APS) asked several prominent psychology researchers for their forecasts, and received some of the following responses.


Lauri Nummenmaa (Assistant professor, Aalto University, Finland) predicts a path similar to Schultz and Schultz’s, stating that “a major aim of the future psychological science would involve re-establishing the link between the brain and behavior”. Modupe Akinola (Assistant professor, Columbia Business School) hopes “that advancements in technology will allow for more unobtrusive ways of measuring bodily responses”.

Kristen Lindquist (Assistant professor of psychology, University of North Carolina School of Medicine) centers on emotional responses, saying that “We are just beginning to understand how a person’s expectations, knowledge, and prior experiences shape his or her emotions. Emotions play a role in every moment of waking life from decisions to memories to feelings, so understanding emotions will help us to understand the mind more generally.”

Tal Yarkoni (Director, Psychoinformatics Lab, University of Texas at Austin) provides a forthright assessment of what the future of experimental psychology has in store: “psychological scientists will have better data, better tools, and more reliable methods of aggregation and evaluation”.

Whatever the future of experimental psychology looks like, we at iMotions aim to keep providing all the tools needed to carry out rigorous experimental psychology research.

I hope you’ve enjoyed reading this introduction to experimental psychology.


[1] Shiraev, E. (2015). A history of psychology . Thousand Oaks, CA: SAGE Publications.

[2] Schultz, D. P., & Schultz, S. E. (2011). A History of Modern Psychology . Cengage, Canada.

[3] Watson, J.B.; Rayner, R. (1920). “Conditioned emotional reactions”. Journal of Experimental Psychology . 3 (1): 1–14. doi:10.1037/h0069608.

[4] Pavlov, I. P. (1928). Lectures on conditioned reflexes . (Translated by W.H. Gantt) London: Allen and Unwin.

[5] Brewer, C. L. (1991). Perspectives on John B. Watson . In G. A. Kimble, M. Wertheimer, & C. White (Eds.), Portraits of pioneers in psychology (pp. 171–186). Washington, DC: American Psychological Association.

[6] Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgments. In H. Guetzkow (Ed.), Groups, leadership and men (pp. 177–190). Pittsburgh, PA: Carnegie Press.

[7] Simons, D. and Chabris, C. (1999). Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception , 28(9), pp.1059-1074.

[8] Drew, T., Võ, M. L-H., Wolfe, J. M. (2013). The invisible gorilla strikes again: sustained inattentional blindness in expert observers. Psychological Science, 24 (9):1848–1853. doi: 10.1177/0956797613479386.


How to Conduct a Psychology Experiment

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.


Conducting your first psychology experiment can be a long, complicated, and sometimes intimidating process. It can be especially confusing if you are not quite sure where to begin or which steps to take.

Like other sciences, psychology utilizes the  scientific method  and bases conclusions upon empirical evidence. When conducting an experiment, it is important to follow the basic steps of the scientific method:

  • Ask a testable question
  • Define your variables
  • Conduct background research
  • Design your experiment
  • Perform the experiment
  • Collect and analyze the data
  • Draw conclusions
  • Share the results with the scientific community

At a Glance

It's important to know the steps of the scientific method if you are conducting an experiment in psychology or another field. The process encompasses finding a problem you want to explore, learning what has already been discovered about the topic, determining your variables, and finally designing and performing your experiment. But the process doesn't end there! Once you've collected your data, it's time to analyze the numbers, determine what they mean, and share what you've found.

Find a Research Problem or Question

Picking a research problem can be one of the most challenging steps when you are conducting an experiment. After all, there are so many different topics you might choose to investigate.

Are you stuck for an idea? Consider some of the following:

Investigate a Commonly Held Belief

Folk knowledge is a good source of questions that can serve as the basis for psychological research. For example, many people believe that staying up all night to cram for a big exam can actually hurt test performance.

You could conduct a study to compare the test scores of students who stayed up all night with the scores of students who got a full night's sleep before the exam.

Review Psychology Literature

Published studies are a great source of unanswered research questions. In many cases, the authors will even note the need for further research. Find a published study that you find intriguing, and then come up with some questions that require further exploration.

Think About Everyday Problems

There are many practical applications for psychology research. Explore various problems that you or others face each day, and then consider how you could research potential solutions. For example, you might investigate different memorization strategies to determine which methods are most effective.

Define Your Variables

Variables are anything that might impact the outcome of your study. An operational definition describes exactly what the variables are and how they are measured within the context of your study.

For example, if you were doing a study on the impact of sleep deprivation on driving performance, you would need to operationally define sleep deprivation and driving performance.

An operational definition refers to a precise way that an abstract concept will be measured. For example, you cannot directly observe and measure something like test anxiety. You can, however, use an anxiety scale and assign values based on how many anxiety symptoms a person is experiencing.

In this example, you might define sleep deprivation as getting less than seven hours of sleep at night. You might define driving performance as how well a participant does on a driving test.

What is the purpose of operationally defining variables? The main purpose is control. By understanding what you are measuring, you can control for it by holding the variable constant between all groups or manipulating it as an independent variable .
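As a concrete sketch, the operational definitions from the sleep-deprivation example might look like this in code. The seven-hour cutoff comes from the text above; the driving-test scoring rule is a hypothetical scale invented purely for illustration.

```python
def is_sleep_deprived(hours_slept):
    """Operational definition from the example: fewer than seven
    hours of sleep the previous night."""
    return hours_slept < 7.0

def driving_performance(errors):
    """Hypothetical operational definition: driving-test score out
    of 100, deducting 5 points per error made on the course."""
    return max(0, 100 - 5 * errors)

print(is_sleep_deprived(6.5), driving_performance(3))  # True 85
```

Writing the definitions down this precisely is exactly what makes them controllable: every participant is classified and scored by the same rule.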

Develop a Hypothesis

The next step is to develop a testable hypothesis that predicts how the operationally defined variables are related. In the example above, the hypothesis might be: "Students who are sleep-deprived will perform worse than students who are not sleep-deprived on a test of driving performance."

Null Hypothesis

In order to determine whether the results of the study are significant, it is essential to also have a null hypothesis. The null hypothesis is the prediction that one variable will have no association with the other variable.

In other words, the null hypothesis assumes that there will be no difference in the effects of the two treatments in our experimental and control groups.

The null hypothesis is assumed to be valid unless contradicted by the results. The experimenters can either reject the null hypothesis in favor of the alternative hypothesis or not reject the null hypothesis.

It is important to remember that not rejecting the null hypothesis does not mean that you are accepting the null hypothesis. To say that you are accepting the null hypothesis is to suggest that something is true simply because you did not find any evidence against it. This represents a logical fallacy that should be avoided in scientific research.  

Conduct Background Research

Once you have developed a testable hypothesis, it is important to spend some time doing some background research. What do researchers already know about your topic? What questions remain unanswered?

You can learn about previous research on your topic by exploring books, journal articles, online databases, newspapers, and websites devoted to your subject.

Reading previous research helps you gain a better understanding of what you will encounter when conducting an experiment. Understanding the background of your topic provides a better basis for your own hypothesis.

After conducting a thorough review of the literature, you might choose to alter your own hypothesis. Background research also allows you to explain why you chose to investigate your particular hypothesis and articulate why the topic merits further exploration.

As you research the history of your topic, take careful notes and create a working bibliography of your sources. This information will be valuable when you begin to write up your experiment results.

Select an Experimental Design

After conducting background research and finalizing your hypothesis, your next step is to develop an experimental design. There are three basic types of designs that you might utilize. Each has its own strengths and weaknesses:

Pre-Experimental Design

A single group of participants is studied, and there is no comparison between a treatment group and a control group. Examples of pre-experimental designs include case studies (one group is given a treatment and the results are measured) and pre-test/post-test studies (one group is tested, given a treatment, and then retested).

Quasi-Experimental Design

This type of experimental design does include a control group but does not include randomization. This type of design is often used if it is not feasible or ethical to perform a randomized controlled trial.

True Experimental Design

A true experimental design, also known as a randomized controlled trial, includes both of the elements that pre-experimental designs and quasi-experimental designs lack—control groups and random assignment to groups.
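The random-assignment step that distinguishes a true experiment can be sketched in a few lines of Python; the group names and round-robin dealing scheme here are illustrative choices, not a prescribed procedure.

```python
import random

def randomly_assign(participants, groups=("control", "treatment")):
    """Random assignment: shuffle the pool, then deal participants
    round-robin so each has an equal chance of any condition."""
    pool = list(participants)
    random.shuffle(pool)
    return {name: pool[i::len(groups)] for i, name in enumerate(groups)}

assignment = randomly_assign(["P%02d" % i for i in range(10)])
print({name: len(members) for name, members in assignment.items()})
# {'control': 5, 'treatment': 5}
```

Because the shuffle happens before the deal, any participant characteristic (driving experience, age, and so on) is spread across conditions by chance rather than by choice.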

Standardize Your Procedures

In order to arrive at legitimate conclusions, it is essential to compare apples to apples.

Each participant in each group must receive the same treatment under the same conditions.

For example, in our hypothetical study on the effects of sleep deprivation on driving performance, the driving test must be administered to each participant in the same way. The driving course must be the same, the obstacles faced must be the same, and the time given must be the same.

Choose Your Participants

In addition to making sure that the testing conditions are standardized, it is also essential to ensure that your pool of participants is the same.

If the individuals in your control group (those who are not sleep-deprived) all happen to be amateur race car drivers while those in your experimental group (those who are sleep-deprived) have all just recently earned their driver's licenses, your experiment will lack standardization.

When choosing subjects, there are some different techniques you can use.

Simple Random Sample

In a simple random sample, the participants are randomly selected from a group. A simple random sample can be used to represent the entire population from which it is drawn.

Drawing a simple random sample can be helpful when you don't know a lot about the characteristics of the population.

Stratified Random Sample

In a stratified random sample, participants are randomly selected from different subsets of the population. These subsets might include characteristics such as geographic location, age, sex, race, or socioeconomic status.

Stratified random samples are more complex to carry out. However, you might opt for this method if there are key characteristics about the population that you want to explore in your research.
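Both sampling techniques can be sketched with Python's `random` module. The population of 100 people and its two "age_group" strata are invented purely for illustration.

```python
import random

random.seed(1)  # reproducible draws

# Hypothetical population of 100 people split across two age strata.
population = [{"id": i, "age_group": "18-30" if i % 2 else "31-60"}
              for i in range(100)]

# Simple random sample: every member has an equal chance of selection.
simple = random.sample(population, k=10)

# Stratified random sample: draw separately within each stratum so
# each subset is represented by a fixed number of participants.
def stratified_sample(pop, key, k_per_stratum):
    strata = {}
    for person in pop:
        strata.setdefault(person[key], []).append(person)
    return [p for members in strata.values()
            for p in random.sample(members, k_per_stratum)]

stratified = stratified_sample(population, "age_group", 5)
print(len(simple), len(stratified))  # 10 10
```

The simple sample may by chance over-represent one age group; the stratified sample guarantees five participants from each.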

Conduct Tests and Collect Data

After you have selected participants, the next steps are to conduct your tests and collect the data. Before doing any testing, however, there are a few important concerns that need to be addressed.

Address Ethical Concerns

First, you need to be sure that your testing procedures are ethical . Generally, you will need to gain permission to conduct any type of testing with human participants by submitting the details of your experiment to your school's Institutional Review Board (IRB), sometimes referred to as the Human Subjects Committee.

Obtain Informed Consent

After you have gained approval from your institution's IRB, you will need to present informed consent forms to each participant. This form offers information on the study, the data that will be gathered, and how the results will be used. The form also gives participants the option to withdraw from the study at any point in time.

Once this step has been completed, you can begin administering your testing procedures and collecting the data.

Analyze the Results

After collecting your data, it is time to analyze the results of your experiment. Researchers use statistics to determine if the results of the study support the original hypothesis and if the results are statistically significant.

Statistical significance means that the study's results are unlikely to have occurred simply by chance.

The types of statistical methods you use to analyze your data depend largely on the type of data that you collected. If you are using a random sample of a larger population, you will need to utilize inferential statistics.

These statistical methods make inferences about how the results relate to the population at large.
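As a sketch of one common inferential test, here is Welch's two-sample t statistic applied to the sleep-deprivation example. The driving scores and the rough |t| > 2 rule of thumb are illustrative; a real analysis would compute an exact p-value for the degrees of freedom involved.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic: difference between group
    means divided by the combined standard error."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical driving-test scores (out of 100) for each group.
rested         = [88, 92, 85, 90, 87, 91, 89, 86]
sleep_deprived = [78, 74, 80, 72, 77, 75, 79, 73]

t = welch_t(rested, sleep_deprived)
# A |t| far above ~2 would typically lead us to reject the null
# hypothesis of "no difference" at the conventional 0.05 level.
print(t > 2)  # True
```

If the statistic had landed near zero, we would fail to reject the null hypothesis, which, as noted above, is not the same as accepting it.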

Because you are making inferences based on a sample, it has to be assumed that there will be a certain margin of error. This refers to the amount of error in your results. A large margin of error means that there will be less confidence in your results, while a small margin of error means that you are more confident that your results are an accurate reflection of what exists in that population.
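The relationship between sample size and margin of error can be made concrete with the standard formula for a sample proportion; the 0.5 proportion and the sample sizes below are illustrative choices.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a sample proportion p
    observed in a sample of size n (z = 1.96 for ~95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A larger sample shrinks the margin of error, increasing confidence
# that the sample reflects the population.
print(round(margin_of_error(0.5, 100), 3))   # 0.098 (about 10 points)
print(round(margin_of_error(0.5, 1000), 3))  # 0.031 (about 3 points)
```

Tenfold more participants cuts the margin of error by roughly a factor of three, since the error shrinks with the square root of the sample size.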

Share Your Results After Conducting an Experiment

Your final task in conducting an experiment is to communicate your results. By sharing your experiment with the scientific community, you are contributing to the knowledge base on that particular topic.

One of the most common ways to share research results is to publish the study in a peer-reviewed professional journal. Other methods include presenting results at conferences, in book chapters, or in academic presentations.

In your case, it is likely that your class instructor will expect a formal write-up of your experiment in the same format required in a professional journal article or lab report :

  • Title page
  • Abstract
  • Introduction
  • Method
  • Results
  • Discussion
  • References
  • Tables and figures

What This Means For You

Designing and conducting a psychology experiment can be quite intimidating, but breaking the process down step-by-step can help. No matter what type of experiment you decide to perform, always check with your instructor and your school's institutional review board for permission before you begin.



Can Induced Awe Reduce Anti-Gay Prejudice in Heterosexual Adults and Does the Need for Closure Moderate this Effect?


With sexual prejudice continuing to be widely prevalent and seriously harmful, there is a need to find ways to reduce anti-gay prejudice (AGP). This online experimental study examined whether a novel intervention, induced awe, can reduce AGP; whether those high in need for closure (NFC) show higher AGP; and whether NFC moderates the effect of awe on AGP. In total, 154 heterosexual adults completed the Need for Closure Scale (Roets & Van Hiel, 2011) before being randomly assigned to one of three emotion-inducement interventions: 1) watching a 4.43-minute video inducing the target emotion, awe; 2) watching one inducing the comparison emotion, amusement; or 3) watching one inducing a neutral emotion as a control. Post-intervention, the participants completed an explicit measure, the Homosexuality Attitudes Scale (HAS; Kite & Deaux, 1986). Data were analysed using a 2 (NFC) x 3 (emotion type) independent factorial ANOVA. None of the three hypotheses was supported, since there were no main effects of awe or NFC on AGP, and no interaction effect of awe and NFC on AGP. Key implications of these results were that 1) awe inducement does not change prejudicial attitudes at an explicit level (Dale et al., 2020); 2) factors beyond NFC, like Right-Wing Authoritarianism (RWA) and Social Dominance Orientation (SDO), may influence the effectiveness of awe in reducing prejudice; and 3) questions were raised about certain boundaries to awe’s effectiveness. Methodological modifications suggested for future research include using implicit measures or veiled elicitation methods for authentic measurement of AGP, employing more potent awe elicitors, and assessing the mediating role of RWA and SDO in the effect of NFC on AGP.


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License .


