The simplest way to understand a variable is as any characteristic or attribute that can experience change or vary over time or context – hence the name “variable”. For example, the dosage of a particular medicine could be classified as a variable, as the amount can vary (i.e., a higher dose or a lower dose). Similarly, gender, age or ethnicity could be considered demographic variables, because each person varies in these respects.
Within research, especially scientific research, variables form the foundation of studies, as researchers are often interested in how one variable impacts another, and the relationships between different variables. For example:
As you can see, variables are often used to explain relationships between different elements and phenomena. In scientific studies, especially experimental studies, the objective is often to understand the causal relationships between variables. In other words, the role of cause and effect between variables. This is achieved by manipulating certain variables while controlling others – and then observing the outcome. But, we’ll get into that a little later…
Variables can be a little intimidating for new researchers because there are a wide variety of variables, and oftentimes, there are multiple labels for the same thing. To lay a firm foundation, we’ll first look at the three main types of variables, namely:
Simply put, the independent variable is the “cause” in the relationship between two (or more) variables. In other words, when the independent variable changes, it has an impact on another variable.
For example:
It’s useful to know that independent variables can go by a few different names, including explanatory variables (because they explain an event or outcome) and predictor variables (because they predict the value of another variable). Terminology aside though, the most important takeaway is that independent variables are assumed to be the “cause” in any cause-effect relationship. As you can imagine, these types of variables are of major interest to researchers, as many studies seek to understand the causal factors behind a phenomenon.
While the independent variable is the “cause”, the dependent variable is the “effect” – or rather, the affected variable. In other words, the dependent variable is the variable that is assumed to change as a result of a change in the independent variable.
Keeping with the previous example, let’s look at some dependent variables in action:
In scientific studies, researchers will typically pay very close attention to the dependent variable (or variables), carefully measuring any changes in response to hypothesised independent variables. This can be tricky in practice, as it’s not always easy to reliably measure specific phenomena or outcomes – or to be certain that the actual cause of the change is in fact the independent variable.
As the adage goes, correlation is not causation. In other words, just because two variables have a relationship doesn’t mean that it’s a causal relationship – they may just happen to vary together. For example, you could find a correlation between the number of people who own a certain brand of car and the number of people who have a certain type of job. Just because these two figures are correlated, it doesn’t mean that owning that brand of car causes someone to have that type of job, or vice versa. The correlation could, for example, be caused by another factor such as income level or age group, which would affect both car ownership and job type.
To confidently establish a causal relationship between an independent variable and a dependent variable (i.e., X causes Y), you’ll typically need an experimental design, where you have complete control over the environment and the variables of interest. But even so, this doesn’t always translate into the “real world”. Simply put, what happens in the lab sometimes stays in the lab!
As an alternative to pure experimental research, correlational or “quasi-experimental” research (where the researcher cannot manipulate or change variables) can be done on a much larger scale more easily, allowing one to understand specific relationships in the real world. These types of studies also assume some causality between independent and dependent variables, but it’s not always clear. So, if you go this route, you need to be cautious in terms of how you describe the impact and causality between variables and be sure to acknowledge any limitations in your own research.
In an experimental design, a control variable (or controlled variable) is a variable that is intentionally held constant to ensure it doesn’t have an influence on any other variables. As a result, this variable remains unchanged throughout the course of the study. In other words, it’s a variable that’s not allowed to vary – tough life 🙂
As we mentioned earlier, one of the major challenges in identifying and measuring causal relationships is that it’s difficult to isolate the impact of variables other than the independent variable. Simply put, there’s always a risk that there are factors beyond the ones you’re specifically looking at that might be impacting the results of your study. So, to minimise the risk of this, researchers will attempt (as best possible) to hold other variables constant . These factors are then considered control variables.
Some examples of variables that you may need to control include:
Which specific variables need to be controlled for will vary tremendously depending on the research project at hand, so there’s no generic list of control variables to consult. As a researcher, you’ll need to think carefully about all the factors that could vary within your research context and then consider how you’ll go about controlling them. A good starting point is to look at previous studies similar to yours and pay close attention to which variables they controlled for.
Of course, you won’t always be able to control every possible variable, and so, in many cases, you’ll just have to acknowledge their potential impact and account for them in the conclusions you draw. Every study has its limitations , so don’t get fixated or discouraged by troublesome variables. Nevertheless, always think carefully about the factors beyond what you’re focusing on – don’t make assumptions!
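To see why holding things constant matters, here’s a hypothetical Python sketch (all figures invented): a made-up experiment where temperature affects the outcome alongside the variable we actually care about. Treating temperature as a control variable, by fixing it, visibly shrinks the noise in our measurements.

```python
import random
import statistics

random.seed(11)

# Hypothetical sketch: fertilizer dose (IV) affects growth (DV), but
# temperature also affects growth. Holding temperature constant (a
# control variable) reduces the spread of the measured outcome.
def run_trial(dose, control_temperature):
    temp = 22.0 if control_temperature else random.uniform(10, 34)
    return 2.0 * dose + 0.5 * temp + random.gauss(0, 1)

for controlled in (False, True):
    outcomes = [run_trial(dose=5, control_temperature=controlled)
                for _ in range(500)]
    print(f"temperature controlled={controlled}: "
          f"stdev of outcome = {statistics.stdev(outcomes):.2f}")
```

With temperature left free to vary, the outcome’s spread is several times larger, and any effect of the dose is much harder to detect against that noise.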
As we mentioned, independent, dependent and control variables are the most common variables you’ll come across in your research, but they’re certainly not the only ones you need to be aware of. Next, we’ll look at a few “secondary” variables that you need to keep in mind as you design your research.
Let’s jump into it…
A moderating variable is a variable that influences the strength or direction of the relationship between an independent variable and a dependent variable. In other words, moderating variables affect how much (or how little) the independent variable (IV) affects the dependent variable (DV), or whether the IV has a positive or negative relationship with the DV (i.e., moves in the same or opposite direction).
For example, in a study about the effects of sleep deprivation on academic performance, gender could be used as a moderating variable to see if there are any differences in how men and women respond to a lack of sleep. In such a case, one may find that gender has an influence on how much students’ scores suffer when they’re deprived of sleep.
It’s important to note that while moderators can have an influence on outcomes, they don’t necessarily cause them; rather, they modify or “moderate” existing relationships between other variables. This means that it’s possible for two different groups with similar characteristics, but different levels of moderation, to experience very different results from the same experiment or study design.
Mediating variables are often used to explain the relationship between the independent and dependent variable(s). For example, if you were researching the effects of age on job satisfaction, then education level could be considered a mediating variable, as it may explain why older people have higher job satisfaction than younger people – they may have more experience or better qualifications, which lead to greater job satisfaction.
Mediating variables also help researchers understand how different factors interact with each other to influence outcomes. For instance, if you wanted to study the effect of stress on academic performance, then coping strategies might act as a mediating factor by influencing both stress levels and academic performance simultaneously. For example, students who use effective coping strategies might be less stressed but also perform better academically due to their improved mental state.
In addition, mediating variables can provide insight into causal relationships between two variables by helping researchers determine whether changes in one factor directly cause changes in another – or whether there is an indirect relationship between them mediated by some third factor(s). For instance, if you wanted to investigate the impact of parental involvement on student achievement, you would need to consider family dynamics as a potential mediator, since it could influence both parental involvement and student achievement simultaneously.
A confounding variable (also known as a third variable or lurking variable) is an extraneous factor that can influence the relationship between two variables being studied. Specifically, for a variable to be considered a confounding variable, it needs to meet two criteria: it must be related to the independent variable, and it must influence the dependent variable.
Some common examples of confounding variables include demographic factors such as gender, ethnicity, socioeconomic status, age, education level, and health status. In addition to these, there are also environmental factors to consider. For example, air pollution could confound the impact of the variables of interest in a study investigating health outcomes.
Naturally, it’s important to identify as many confounding variables as possible when conducting your research, as they can heavily distort the results and lead you to draw incorrect conclusions . So, always think carefully about what factors may have a confounding effect on your variables of interest and try to manage these as best you can.
Latent variables are unobservable factors that can influence the behaviour of individuals and explain certain outcomes within a study. They’re also known as hidden or underlying variables, and what makes them rather tricky is that they can’t be directly observed or measured. Instead, latent variables must be inferred from other observable data points such as responses to surveys or experiments.
For example, in a study of mental health, the variable “resilience” could be considered a latent variable. It can’t be directly measured, but it can be inferred from measures of mental health symptoms, stress, and coping mechanisms. The same applies to a lot of concepts we encounter every day – for example:
One way in which we overcome the challenge of measuring the immeasurable is latent variable models (LVMs). An LVM is a type of statistical model that describes a relationship between observed variables and one or more unobserved (latent) variables. These models allow researchers to uncover patterns in their data that may not have been visible before. Those patterns can then inform hypotheses about previously unknown cause-and-effect relationships among those same variables. Powerful stuff, we say!
In the world of scientific research, there’s no shortage of variable types, some of which have multiple names and some of which overlap with each other. In this post, we’ve covered some of the popular ones, but remember that this is not an exhaustive list.
To recap, we’ve explored:
If you’re still feeling a bit lost and need a helping hand with your research project, check out our 1-on-1 coaching service, where we guide you through each step of the research journey. Also, be sure to check out our free dissertation writing course and our collection of free, fully-editable chapter templates.
This post was based on one of our popular Research Bootcamps. If you're working on a research project, you'll definitely want to check this out...
Have you ever wondered how scientists make discoveries and how researchers come to understand the world around us? A crucial tool in their kit is the concept of the independent variable, which helps them delve into the mysteries of science and everyday life.
An independent variable is a condition or factor that researchers manipulate to observe its effect on another variable, known as the dependent variable. In simpler terms, it’s like adjusting the dials and watching what happens! By changing the independent variable, scientists can see if and how it causes changes in what they are measuring or observing, helping them make connections and draw conclusions.
In this article, we’ll explore the fascinating world of independent variables, journey through their history, examine theories, and look at a variety of examples from different fields.
Once upon a time, in a world thirsty for understanding, people observed the stars, the seas, and everything in between, seeking to unlock the mysteries of the universe.
The story of the independent variable begins with a quest for knowledge, a journey taken by thinkers and tinkerers who wanted to explain the wonders and strangeness of the world.
The seeds of the idea of independent variables were sown by Sir Francis Galton , an English polymath, in the 19th century. Galton wore many hats—he was a psychologist, anthropologist, meteorologist, and a statistician!
It was his diverse interests that led him to explore the relationships between different factors and their effects. Galton was curious—how did one thing lead to another, and what could be learned from these connections?
As Galton delved into the world of statistical theories , the concept of independent variables started taking shape.
He was interested in understanding how characteristics, like height and intelligence, were passed down through generations.
Galton’s work laid the foundation for later thinkers to refine and expand the concept, turning it into an invaluable tool for scientific research.
After Galton’s pioneering work, the concept of the independent variable continued to evolve and grow. Scientists and researchers from various fields adopted and adapted it, finding new ways to use it to make sense of the world.
They discovered that by manipulating one factor (the independent variable), they could observe changes in another (the dependent variable), leading to groundbreaking insights and discoveries.
Through the years, the independent variable became a cornerstone in experimental design . Researchers in fields like physics, biology, psychology, and sociology used it to test hypotheses, develop theories, and uncover the laws that govern our universe.
The idea that originated from Galton’s curiosity had bloomed into a universal key, unlocking doors to knowledge across disciplines.
Today, the independent variable stands tall as a pillar of scientific research. It helps scientists and researchers ask critical questions, test their ideas, and find answers. Without independent variables, we wouldn’t have many of the advancements and understandings that we take for granted today.
The independent variable plays a starring role in experiments, helping us learn about everything from the smallest particles to the vastness of space. It helps researchers create vaccines, understand social behaviors, explore ecological systems, and even develop new technologies.
In the upcoming sections, we’ll dive deeper into what independent variables are, how they work, and how they’re used in various fields.
Together, we’ll uncover the magic of this scientific concept and see how it continues to shape our understanding of the world around us.
Embarking on the captivating journey of scientific exploration requires us to grasp the essential terms and ideas. It's akin to a treasure hunter mastering the use of a map and compass.
In our adventure through the realm of independent variables, we’ll delve deeper into some fundamental concepts and definitions to help us navigate this exciting world.
In the grand tapestry of research, variables are the gems that researchers seek. They’re elements, characteristics, or behaviors that can shift or vary in different circumstances.
Picture them as the myriad of ingredients in a chef’s kitchen—each variable can be adjusted or modified to create a myriad of dishes, each with a unique flavor!
Understanding variables is essential as they form the core of every scientific experiment and observational study.
Independent Variable
The star of our story, the independent variable, is the one that researchers change or control to study its effects. It’s like a chef experimenting with different spices to see how each one alters the taste of the soup. The independent variable is the catalyst, the initial spark that sets the wheels of research in motion.
Dependent Variable
The dependent variable is the outcome we observe and measure. It’s the altered flavor of the soup that results from the chef’s culinary experiments. This variable depends on the changes made to the independent variable, hence the name!
Observing how the dependent variable reacts to changes helps scientists draw conclusions and make discoveries.
Control Variable
Control variables are the unsung heroes of scientific research. They’re the constants, the elements that researchers keep the same to ensure the integrity of the experiment.
Imagine if our chef used a different type of broth each time he experimented with spices—the results would be all over the place! Control variables keep the experiment grounded and help researchers be confident in their findings.
Confounding Variables
Imagine a hidden rock in a stream, changing the water’s flow in unexpected ways. Confounding variables are similar—they are external factors that can sneak into experiments and influence the outcome, adding twists to our scientific story.
These variables can blur the relationship between the independent and dependent variables, making the results of the study a bit puzzling. Detecting and controlling these hidden elements helps researchers ensure the accuracy of their findings and reach true conclusions.
There are of course other types of variables, and specialized ways of manipulating them in particular fields (such as "schedules of reinforcement" in behavioral research), but we won't get into that too much here.
Manipulation
When researchers manipulate the independent variable, they are orchestrating a symphony of cause and effect. They’re adjusting the strings, the brass, the percussion, observing how each change influences the melody—the dependent variable.
This manipulation is at the heart of experimental research. It allows scientists to explore relationships, unravel patterns, and unearth the secrets hidden within the fabric of our universe.
Observation
With every tweak and adjustment made to the independent variable, researchers are like seasoned detectives, observing the dependent variable for changes, collecting clues, and piecing together the puzzle.
Observing the effects and changes that occur helps them deduce relationships, formulate theories, and expand our understanding of the world. Every observation is a step towards solving the mysteries of nature and human behavior.
Characteristics
Identifying an independent variable in the vast landscape of research can seem daunting, but fear not! Independent variables have distinctive characteristics that make them stand out.
They’re the elements that are deliberately changed or controlled in an experiment to study their effects on the dependent variable. Recognizing these characteristics is like learning to spot footprints in the sand—it leads us to the heart of the discovery!
In Different Types of Research
The world of research is diverse and varied, and the independent variable dons many guises! In the field of medicine, it might manifest as the dosage of a drug administered to patients.
In psychology, it could take the form of different learning methods applied to study memory retention. In each field, identifying the independent variable correctly is the golden key that unlocks the treasure trove of knowledge and insights.
As we forge ahead on our enlightening journey, equipped with a deeper understanding of independent variables and their roles, we’re ready to delve into the intricate theories and diverse examples that underscore their significance.
Now that we’re acquainted with the basic concepts and have the tools to identify independent variables, let’s dive into the fascinating ocean of theories and frameworks.
These theories are like ancient scrolls, providing guidelines and blueprints that help scientists use independent variables to uncover the secrets of the universe.
What is it and How Does it Work?
The scientific method is like a super-helpful treasure map that scientists use to make discoveries. It has steps we follow: asking a question, researching, guessing what will happen (that's a hypothesis!), experimenting, checking the results, figuring out what they mean, and telling everyone about it.
Our hero, the independent variable, is the compass that helps this adventure go the right way!
How Independent Variables Lead the Way
In the scientific method, the independent variable is like the captain of a ship, leading everyone through unknown waters.
Scientists change this variable to see what happens and to learn new things. It’s like having a compass that points us towards uncharted lands full of knowledge!
The Basics of Building
Constructing an experiment is like building a castle, and the independent variable is the cornerstone. It’s carefully chosen and manipulated to see how it affects the dependent variable. Researchers also identify control and confounding variables, ensuring the castle stands strong, and the results are reliable.
Keeping Everything in Check
In every experiment, maintaining control is key to finding the treasure. Scientists use control variables to keep the conditions consistent, ensuring that any changes observed are truly due to the independent variable. It’s like ensuring the castle’s foundation is solid, supporting the structure as it reaches for the sky.
Making Educated Guesses
Before they start experimenting, scientists make educated guesses called hypotheses. It’s like predicting which X marks the spot of the treasure! It often includes the independent variable and the expected effect on the dependent variable, guiding researchers as they navigate through the experiment.
Independent Variables in the Spotlight
When testing these guesses, the independent variable is the star of the show! Scientists change and watch this variable to see if their guesses were right. It helps them figure out new stuff and learn more about the world around us!
Figuring Out Relationships
After the experimenting is done, it’s time for scientists to crack the code! They use statistics to understand how the independent and dependent variables are related and to uncover the hidden stories in the data.
Experimenters have to be careful about how they determine the validity of their findings, which is why they use statistics. Something called "experimenter bias" can get in the way of having true (valid) results, because it's basically when the experimenter influences the outcome based on what they believe to be true (or what they want to be true!).
How Important are the Discoveries?
Through statistical analysis, scientists determine the significance of their findings. It’s like discovering if the treasure found is made of gold or just shiny rocks. The analysis helps researchers know if the independent variable truly had an effect, contributing to the rich tapestry of scientific knowledge.
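If you’d like to peek behind the curtain, here’s one such statistic in action: a small hypothetical Python sketch (all numbers invented) comparing a treatment group against a control group with a two-sample t statistic, to judge whether the independent variable really moved the dependent variable.

```python
import math
import random
import statistics

random.seed(5)

# Hypothetical sketch: did receiving the treatment (IV) change the
# measured outcome (DV)? Compare group means with Welch's t statistic.
control = [random.gauss(100, 10) for _ in range(50)]
treated = [random.gauss(108, 10) for _ in range(50)]

def t_statistic(a, b):
    # Difference in means divided by its standard error.
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

t = t_statistic(control, treated)
print(f"t = {t:.2f}")  # values well beyond roughly +/-2 hint at a real effect
```

In practice you’d convert the t value into a p-value and compare it against a pre-chosen significance level, but the core idea is the same: how big is the difference relative to the noise?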
As we uncover more about how theories and frameworks use independent variables, we start to see how awesome they are in helping us learn more about the world. But we’re not done yet!
Up next, we’ll look at tons of examples to see how independent variables work their magic in different areas.
Independent variables take on many forms, showcasing their versatility in a range of experiments and studies. Let’s uncover how they act as the protagonists in numerous investigations and learning quests!
1) Plant Growth
Consider an experiment aiming to observe the effect of varying water amounts on plant height. In this scenario, the amount of water given to the plants is the independent variable!
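This experiment can be mimicked with a tiny hypothetical Python toy model (the growth rule is entirely made up): we set the independent variable (water per day), and read off the dependent variable (height).

```python
# Hypothetical sketch of the plant-growth experiment: water amount is
# the independent variable we set; plant height is the dependent
# variable we measure. The growth rule below is purely illustrative.
def simulate_growth(water_ml_per_day, days=30):
    # Toy rule: more water helps, but only up to 100 ml/day.
    growth_per_day = min(water_ml_per_day, 100) * 0.01  # cm per day
    return growth_per_day * days  # final height in cm

for water in (20, 50, 100):  # levels of the independent variable
    print(f"{water} ml/day -> {simulate_growth(water):.1f} cm")
```

Notice the shape of any experiment here: we choose the levels of the independent variable, run the same procedure at each level, and compare the resulting dependent variable.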
Suppose we are curious about the time it takes for water to freeze at different temperatures. The temperature of the freezer becomes the independent variable as we adjust it to observe the results!
Have you ever observed how shadows change? In an experiment, adjusting the light angle to observe its effect on an object’s shadow makes the angle of light the independent variable!
In medical studies, determining how varying medicine dosages influence a patient’s recovery is essential. Here, the dosage of the medicine administered is the independent variable!
Researchers might examine the impact of different exercise forms on individuals’ health. The various exercise forms constitute the independent variable in this study!
Have you pondered how sleep duration affects your well-being the following day? In such research, the hours of sleep serve as the independent variable!
Psychologists might investigate how diverse study methods influence test outcomes. Here, the different study methods adopted by students are the independent variable!
Have you experienced varied emotions with different music genres? The genre of music played becomes the independent variable when researching its influence on emotions!
Suppose researchers are exploring how room colors affect individuals’ emotions. In this case, the room colors act as the independent variable!
10) Rainfall and Plant Life
Environmental scientists may study the influence of varying rainfall levels on vegetation. In this instance, the amount of rainfall is the independent variable!
Examining how temperature variations affect animal behavior is fascinating. Here, the varying temperatures serve as the independent variable!
Investigating the effects of different pollution levels on air quality is crucial. In such studies, the pollution level is the independent variable!
Researchers might explore how varying internet speeds impact work productivity. In this exploration, the internet speed is the independent variable!
Examining how different devices affect user experience is interesting. Here, the type of device used is the independent variable!
Suppose a study aims to determine how different software versions influence system performance. The software version becomes the independent variable!
Educators might investigate the effect of varied teaching styles on student engagement. In such a study, the teaching style is the independent variable!
Researchers could explore how different class sizes influence students’ learning. Here, the class size is the independent variable!
Examining the relationship between the frequency of homework assignments and academic success is essential. The frequency of homework becomes the independent variable!
Astronomers might study how different telescopes affect celestial observation. In this scenario, the telescope type is the independent variable!
Investigating the influence of varying light pollution levels on star visibility is intriguing. Here, the level of light pollution is the independent variable!
Suppose a study explores how observation duration affects the detail captured in astronomical images. The duration of observation serves as the independent variable!
Sociologists may examine how the size of a community influences social interactions. In this research, the community size is the independent variable!
Investigating the effect of diverse cultural exposure on social tolerance is vital. Here, the level of cultural exposure is the independent variable!
Researchers could explore how different economic statuses impact educational achievements. In such studies, economic status is the independent variable!
Sports scientists might study how varying training intensities affect athletes’ performance. In this case, the training intensity is the independent variable!
Examining the relationship between different sports equipment and player safety is crucial. Here, the type of equipment used is the independent variable!
Suppose researchers are investigating how the size of a sports team influences game strategy. The team size becomes the independent variable!
Nutritionists may explore the impact of various diets on individuals’ health. In this exploration, the type of diet followed is the independent variable!
Investigating how different caloric intakes influence weight change is essential. In such a study, the caloric intake is the independent variable!
Researchers could examine how consuming a variety of foods affects nutrient absorption. Here, the variety of foods consumed is the independent variable!
Isn't it fantastic how independent variables play such an essential part in so many studies? But the excitement doesn't stop there!
Now, let’s explore how findings from these studies, led by independent variables, make a big splash in the real world and improve our daily lives!
31) Treatment Optimization
By studying different medicine dosages and treatment methods as independent variables, doctors can figure out the best ways to help patients recover quicker and feel better. This leads to more effective medicines and treatment plans!
Researching the effects of sleep, exercise, and diet helps health experts give us advice on living healthier lives. By changing these independent variables, scientists uncover the secrets to feeling good and staying well!
33) Speeding Up the Internet
When scientists explore how different internet speeds affect our online activities, they’re able to develop technologies to make the internet faster and more reliable. This means smoother video calls and quicker downloads!
By examining how we interact with various devices and software, researchers can design technology that’s easier and more enjoyable to use. This leads to cooler gadgets and more user-friendly apps!
35) Enhancing Learning
Investigating different teaching styles, class sizes, and study methods helps educators discover what makes learning fun and effective. This research shapes classrooms, teaching methods, and even homework!
By studying how students with diverse needs respond to different support strategies, educators can create personalized learning experiences. This means every student gets the help they need to succeed!
37) Conserving Nature
Researching how rainfall, temperature, and pollution affect the environment helps scientists suggest ways to protect our planet. By studying these independent variables, we learn how to keep nature healthy and thriving!
Scientists studying the effects of pollution and human activities on climate change are leading the way in finding solutions. By exploring these independent variables, we can develop strategies to combat climate change and protect the Earth!
39) Building Stronger Communities
Sociologists studying community size, cultural exposure, and economic status help us understand what makes communities happy and united. This knowledge guides the development of policies and programs for stronger societies!
By exploring how exposure to diverse cultures affects social tolerance, researchers contribute to fostering more inclusive and harmonious societies. This helps build a world where everyone is respected and valued!
41) Optimizing athlete training.
Sports scientists studying training intensity, equipment type, and team size help athletes reach their full potential. This research leads to better training programs, safer equipment, and more exciting games!
By investigating how different game strategies are influenced by various team compositions, researchers contribute to the evolution of sports. This means more thrilling competitions and matches for us to enjoy!
43) Guiding healthy eating.
Nutritionists researching diet types, caloric intake, and food variety help us understand what foods are best for our bodies. This knowledge shapes dietary guidelines and helps us make tasty, yet nutritious, meal choices!
By studying the effects of different nutrients and diets, researchers educate us on maintaining a balanced diet. This fosters a greater awareness of nutritional well-being and encourages healthier eating habits!
As we journey through these real-world applications, we witness the incredible impact of studies featuring independent variables. The exploration doesn’t end here, though!
Let’s continue our adventure and see how we can identify independent variables in our own observations and inquiries! Keep your curiosity alive, and let’s delve deeper into the exciting realm of independent variables!
So, we’ve seen how independent variables star in many studies, but how about spotting them in our everyday life?
Recognizing independent variables can be like a treasure hunt – you never know where you might find one! Let’s uncover some tips and tricks to identify these hidden gems in various situations.
One of the best ways to spot an independent variable is by asking questions! If you’re curious about something, ask yourself, “What am I changing or manipulating in this situation?” The thing you’re changing is likely the independent variable!
For example, if you’re wondering whether the amount of sunlight affects how quickly your laundry dries, the sunlight amount is your independent variable!
Keep your eyes peeled and observe the world around you! By watching how changes in one thing (like the amount of rain) affect something else (like the height of grass), you can identify the independent variable.
In this case, the amount of rain is the independent variable because it’s what’s changing!
Get hands-on and conduct your own experiments! By changing one thing and observing the results, you’re identifying the independent variable.
If you’re growing plants and decide to water each one differently to see the effects, the amount of water is your independent variable!
In everyday scenarios, independent variables are all around!
When you adjust the temperature of your oven to bake cookies, the oven temperature is the independent variable.
Or if you’re deciding how much time to spend studying for a test, the study time is your independent variable!
Keep being curious and asking “What if?” questions! By exploring different possibilities and wondering how changing one thing could affect another, you’re on your way to identifying independent variables.
If you’re curious about how the color of a room affects your mood, the room color is the independent variable!
Don’t forget about the treasure trove of past studies and experiments! By reviewing what scientists and researchers have done before, you can learn how they identified independent variables in their work.
This can give you ideas and help you recognize independent variables in your own explorations!
Ready for some practice? Let’s put on our thinking caps and try to identify the independent variables in a few scenarios.
Remember, the independent variable is what’s being changed or manipulated to observe the effect on something else! (You can see the answers below)
You’re cooking pasta for dinner and want to find out how the cooking time affects its texture. What is the independent variable?
You decide to try different exercise routines each week to see which one makes you feel the most energetic. What is the independent variable?
You’re growing tomatoes in your garden and decide to use different types of fertilizer to see which one helps them grow the best. What is the independent variable?
You’re preparing for an important test and try studying in different environments (quiet room, coffee shop, library) to see where you concentrate best. What is the independent variable?
You’re curious to see how the number of hours you sleep each night affects your mood the next day. What is the independent variable?
By practicing identifying independent variables in different scenarios, you’re becoming a true independent variable detective. Keep practicing, stay curious, and you’ll soon be spotting independent variables everywhere you go.
Independent Variable: The cooking time is the independent variable. You are changing the cooking time to observe its effect on the texture of the pasta.
Independent Variable: The type of exercise routine is the independent variable. You are trying out different exercise routines each week to see which one makes you feel the most energetic.
Independent Variable: The type of fertilizer is the independent variable. You are using different types of fertilizer to observe their effects on the growth of the tomatoes.
Independent Variable: The study environment is the independent variable. You are studying in different environments to see where you concentrate best.
Independent Variable: The number of hours you sleep is the independent variable. You are changing your sleep duration to see how it affects your mood the next day.
Whew, what a journey we’ve had exploring the world of independent variables! From understanding their definition and role to diving into a myriad of examples and real-world impacts, we’ve uncovered the treasures hidden in the realm of independent variables.
The beauty of independent variables lies in their ability to unlock new knowledge and insights, guiding us to discoveries that improve our lives and the world around us.
By identifying and studying these variables, we embark on exciting learning adventures, solving mysteries and answering questions about the universe we live in.
Remember, the joy of discovery doesn’t end here. The world is brimming with questions waiting to be answered and mysteries waiting to be solved.
Keep your curiosity alive, continue exploring, and who knows what incredible discoveries lie ahead.
The independent variable is the variable that is controlled or changed in a scientific experiment to test its effect on the dependent variable. It doesn’t depend on another variable and isn’t changed by any factors an experimenter is trying to measure. The independent variable is denoted by the letter x in an experiment or graph.
Two classic examples of independent variables are age and time: they can be measured, but not controlled. Even when time isn’t the variable being measured in an experiment, it may still relate to the duration or intensity of the conditions being tested.
For example, a scientist is testing the effect of light and dark on the behavior of moths by turning a light on and off. The independent variable is the amount of light and the moth’s reaction is the dependent variable.
For another example, say you are measuring whether the amount of sleep affects test scores. The hours of sleep would be the independent variable, while the test scores would be the dependent variable.
A change in the independent variable directly causes a change in the dependent variable. If you have a hypothesis written such that you’re looking at whether x affects y , the x is always the independent variable and the y is the dependent variable.
If the dependent and independent variables are plotted on a graph, the x-axis shows the independent variable and the y-axis shows the dependent variable. You can remember this using the DRY MIX acronym, where DRY means the dependent or responding variable goes on the y-axis, while MIX means the manipulated or independent variable goes on the x-axis.
Statistics By Jim
Making statistics intuitive
By Jim Frost
In this post, learn the definitions of independent and dependent variables, how to identify each type, how they differ between different types of studies, and see examples of them in use.
Independent variables (IVs) are the ones that you include in the model to explain or predict changes in the dependent variable. The name helps you understand their role in statistical analysis. These variables are independent. In this context, independent indicates that they stand alone and other variables in the model do not influence them. The researchers are not seeking to understand what causes the independent variables to change.
Independent variables are also known as predictors, factors, treatment variables, explanatory variables, input variables, x-variables, and right-hand variables—because they appear on the right side of the equals sign in a regression equation. In notation, statisticians commonly denote them using Xs. On graphs, analysts place independent variables on the horizontal, or X, axis.
In machine learning, independent variables are known as features.
For example, in a plant growth study, the independent variables might be soil moisture (continuous) and type of fertilizer (categorical).
Statistical models will estimate effect sizes for the independent variables.
Related post: Effect Sizes in Statistics
The nature of independent variables changes based on the type of experiment or study:
Controlled experiments: Researchers systematically control and set the values of the independent variables. In randomized experiments, relationships between independent and dependent variables tend to be causal. The independent variables cause changes in the dependent variable.
Observational studies: Researchers do not set the values of the explanatory variables but instead observe them in their natural environment. When the independent and dependent variables are correlated, those relationships might not be causal.
When you include one independent variable in a regression model, you are performing simple regression. For more than one independent variable, it is multiple regression. Despite the different names, it’s really the same analysis with the same interpretations and assumptions.
Determining which IVs to include in a statistical model is known as model specification. That process involves in-depth research and many subject-area, theoretical, and statistical considerations. At its most basic level, you’ll want to include the predictors you are specifically assessing in your study and confounding variables that will bias your results if you don’t add them—particularly for observational studies.
For more information about choosing independent variables, read my post about Specifying the Correct Regression Model.
Related posts: Randomized Experiments, Observational Studies, Covariates, and Confounding Variables
The dependent variable (DV) is what you want to use the model to explain or predict. The values of this variable depend on other variables. It is the outcome that you’re studying. It’s also known as the response variable, outcome variable, and left-hand variable. Statisticians commonly denote them using a Y. Traditionally, graphs place dependent variables on the vertical, or Y, axis.
For example, in the plant growth study example, a measure of plant growth is the dependent variable. That is the outcome of the experiment, and we want to determine what affects it.
If you’re reading a study’s write-up, how do you distinguish independent variables from dependent variables? Here are some tips!
How statisticians discuss independent variables changes depending on the field of study and type of experiment.
In randomized experiments, look for the following descriptions to identify the independent variables:
In observational studies, independent variables are a bit different. While the researchers likely want to establish causation, that’s harder to do with this type of study, so they often won’t use the word “cause.” They also don’t set the values of the predictors. Some independent variables are the experiment’s focus, while others help keep the experimental results valid.
Here’s how to recognize independent variables in observational studies:
Regardless of the study type, if you see an estimated effect size, it is an independent variable.
Dependent variables are the outcome. The IVs explain the variability in, or cause changes to, the DV. Focus on the “depends” aspect. The value of the dependent variable depends on the IVs. If Y depends on X, then Y is the dependent variable. This applies to both randomized experiments and observational studies.
In an observational study about the effects of smoking, the researchers observe the subjects’ smoking status (smoker/non-smoker) and their lung cancer rates. It’s an observational study because they cannot randomly assign subjects to either the smoking or non-smoking group. In this study, the researchers want to know whether lung cancer rates depend on smoking status. Therefore, the lung cancer rate is the dependent variable.
In a randomized COVID-19 vaccine experiment, the researchers randomly assign subjects to the treatment or control group. They want to determine whether COVID-19 infection rates depend on vaccination status. Hence, the infection rate is the DV.
Note that a variable can be an independent variable in one study but a dependent variable in another. It depends on the context.
For example, one study might assess how the amount of exercise (IV) affects health (DV). However, another study might study the factors (IVs) that influence how much someone exercises (DV). The amount of exercise is an independent variable in one study but a dependent variable in the other!
Regression analysis and ANOVA mathematically describe the relationships between each independent variable and the dependent variable. Typically, you want to determine how changes in one or more predictors associate with changes in the dependent variable. These analyses estimate an effect size for each independent variable.
Suppose researchers study the relationship between wattage, several types of filaments, and the output from a light bulb. In this study, light output is the dependent variable because it depends on the other two variables. Wattage (continuous) and filament type (categorical) are the independent variables.
After performing the regression analysis, the researchers will understand the nature of the relationship between these variables. How much does the light output increase on average for each additional watt? Does the mean light output differ by filament types? They will also learn whether these effects are statistically significant.
Related post: When to Use Regression Analysis
As I mentioned earlier, graphs traditionally display the independent variables on the horizontal X-axis and the dependent variable on the vertical Y-axis. The type of graph depends on the nature of the variables. Here are a couple of examples.
Suppose you experiment to determine whether various teaching methods affect learning outcomes. Teaching method is a categorical predictor that defines the experimental groups. To display this type of data, you can use a boxplot, as shown below.
The groups are along the horizontal axis, while the dependent variable, learning outcomes, is on the vertical. From the graph, method 4 has the best results. A one-way ANOVA will tell you whether these results are statistically significant. Learn more about interpreting boxplots.
Now, imagine that you are studying people’s height and weight. Specifically, do height increases cause weight to increase? Consequently, height is the independent variable on the horizontal axis, and weight is the dependent variable on the vertical axis. You can use a scatterplot to display this type of data.
It appears that as height increases, weight tends to increase. Regression analysis will tell you if these results are statistically significant. Learn more about interpreting scatterplots.
April 2, 2024 at 2:05 am
Hi again Jim
Thanks so much for taking an interest in New Zealand’s Equity Index.
Rather than me trying to explain what our Ministry of Education has done, here is a link to a fairly short paper. Scroll down to page 4 of this (if you have the inclination) – https://fyi.org.nz/request/21253/response/80708/attach/4/1301098%20Response%20and%20Appendix.pdf
The Equity Index is used to allocate only 4% of total school funding. The most advantaged 5% of schools get no “equity funding” and the other 95% get a share of the equity funding pool based on their index score. We are talking a maximum of around $1,000NZD per child per year for the most disadvantaged schools. The average amount is around $200-$300 per child per year.
My concern is that I thought the dependent variable is the thing you want to explain or predict using one or more independent variables. Choosing the form of dependent variable that gets a good fit seems to be answering the question “what can we predict well?” rather than “how do we best predict the factor of interest?” The factor is educational achievement and I think this should have been decided upon using theory rather than experimentation with the data.
As it turns out, the Ministry has chosen a measure of educational achievement that puts a heavy weight on achieving an “excellence” rating on a qualification and a much lower weight on simply gaining a qualification. My reading is that they have adopted the approach our universities take when looking at which students to admit.
It doesn’t seem likely to me that a heavy weighting on excellent achievement is appropriate for targeting extra funding to schools with a lot of under-achieving students.
However, my stats knowledge isn’t extensive and it’s definitely rusty, so your thoughts are most helpful.
Regards Kathy Spencer
April 1, 2024 at 4:08 pm
Hi Jim, Great website, thank you.
I have been looking at New Zealand’s Equity Index which is used to allocate a small amount of extra funding to schools attended by children from disadvantaged backgrounds. The Index uses 37 socioeconomic measures relating to a child’s and their parents’ backgrounds that are found to be associated with educational achievement.
I was a bit surprised to read how they had decided on the dependent variable to be used as the measure of educational achievement. Part of the process was as follows: “Each measure was tested to see the degree to which it could be predicted by the socioeconomic factors selected for the Equity Index.”
Any comment?
Many thanks Kathy Spencer
April 1, 2024 at 9:20 pm
That’s a very complex study and I don’t know much about it. So, that limits what I can say about it. But I’ll give you a few thoughts that come to mind.
This method is common in educational and social research, particularly when the goal is to understand or mitigate the impact of socioeconomic disparities on educational outcomes.
There are the usual concerns about not confusing correlation with causation. However, because this program seems to quantify barriers and then provide extra funding based on the index, I don’t think that’s a problem. They’re not attempting to adjust the socioeconomic measures so no worries about whether they’re directly causal or not.
I might have a small concern about cherry picking the model that happens to maximize the R-squared. Chasing the R-squared rather than having theory drive model selection is often problematic. Chasing the best fit increases the likelihood that the model fits this specific dataset best by random chance rather than being truly the best. If so, it won’t perform as well outside the dataset used to fit the model. Hopefully, they validated the predictive ability of the model using other data.
However, I’m not sure if the extra funding is determined by the model? I don’t know if the index value is calculated separately outside the candidate models and then fed into the various models. Or does the choice of model affect how the index value is calculated? If it’s the former, then the funding doesn’t depend on a potentially cherry picked model. If the latter, it does.
So, I’m not really clear on the purpose of the model. I’m guessing they just want to validate their Equity Index. And maximizing the R-squared doesn’t really say it’s the best Index but it does at least show that it likely has some merit. I’d be curious how they took the 37 measures and combined them into one index. So, I have more questions than answers. I don’t mean that in a critical sense. Just that I know almost nothing about this program.
I’m curious, what was the outcome they picked? How high was the R-squared? And what were your concerns?
February 6, 2024 at 6:57 pm
Excellent explanation, thank you.
February 5, 2024 at 5:04 pm
Thank you for this insightful blog. Is it valid to use a dependent variable delivered from the mean of independent variables in multiple regression if you want to evaluate the influence of each unique independent variable on the dependent variables?
February 5, 2024 at 11:11 pm
It’s difficult to answer your question because I’m not sure what you mean that the DV is “delivered from the mean of IVs.” If you mean that multiple IVs explain changes in the DV’s mean, yes, that’s the standard use for multiple regression.
If you mean something else, please explain in further detail. Thanks!
February 6, 2024 at 6:32 am
What I meant is: the DV values used as parameters for multiple regression are basically calculated as the average of the IVs. For instance:
From 3 IVs (X1, X2, X3), Y is derived as:
Y = (Sum of all IVs) / (3)
Then the resulting Y is used as the DV along with the initial IVs to compute the multiple regression.
February 6, 2024 at 2:17 pm
There are a couple of reasons why you shouldn’t do that.
For starters, Y-hat (the predicted value of the regression equation) is the mean of the DV given specific values of the IV. However, that mean is calculated by using the regression coefficients and constant in the regression equation. You don’t calculate the DV mean as the sum of the IVs divided by the number of IVs. Perhaps given a very specific subject-area context, using this approach might seem to make sense but there are other problems.
A critical problem is that the Y is now calculated using the IVs. Instead, the DVs should be measured outcomes and not calculated from IVs. This violates regression assumptions and produces questionable results.
Additionally, it complicates the interpretation. Because the DV is calculated from the IV, you know the regression analysis will find a relationship between them. But you have no idea if that relationship exists in the real world. This complication occurs because your results are based on forcing the DV to equal a function of the IVs and do not reflect real-world outcomes.
In short, DVs should be real-world outcomes that you measure! And be sure to keep your IVs and DV independent. Let the regression analysis estimate the regression equation from your data that contains measured DVs. Don’t use a function to force the DV to equal some function of the IVs because that’s the opposite direction of how regression works!
I hope that helps!
September 6, 2022 at 7:43 pm
Thank you for sharing.
March 3, 2022 at 1:59 am
Excellent explanation.
February 13, 2022 at 12:31 pm
Thanks a lot for creating this excellent blog. This is my go-to resource for Statistics.
I had been pondering over a question for sometime, it would be great if you could shed some light on this.
In linear and non-linear regression, should the distribution of independent and dependent variables be unskewed? When is there a need to transform the data (say, Box-Cox transformation), and do we transform the independent variables as well?
October 28, 2021 at 12:55 pm
If I use an independent variable (X) and it displays a low p-value (<.05), why is it that when I introduce another independent variable to the regression, the coefficient and p-value of the X from the first regression change to look insignificant? The second variable that I introduced has a low p-value in the regression.
October 29, 2021 at 11:22 pm
Keep in mind that the significance of each IV is calculated after accounting for the variance of all the other variables in the model, assuming you’re using the standard adjusted sums of squares rather than sequential sums of squares. The sums of squares (SS) measure how much dependent-variable variability each IV accounts for. In the illustration below, I’ll assume you’re using adjusted SS.
So, let’s say that originally you have X1 in the model along with some other IVs. Your model estimates the significance of X1 after assessing the variability that the other IVs account for and finds that X1 is significant. Now, you add X2 to the model in addition to X1 and the other IVs. Now, when assessing X1, the model accounts for the variability of the IVs including the newly added X2. And apparently X2 explains a good portion of the variability. X1 is no longer able to account for that variability, which causes it to not be statistically significant.
In other words, X2 explains some of the variability that X1 previously explained. Because X1 no longer explains it, it is no longer significant.
Additionally, the significance of IVs is more likely to change when you add or remove IVs that are correlated. Correlation between IVs is known as multicollinearity, and it can be a problem when there is too much of it. Given the change in significance, I’d check your model for multicollinearity just to be safe! Click the link to read a post I wrote about that!
September 6, 2021 at 8:35 am
nice explanation
August 25, 2021 at 3:09 am
it is excellent explanation
What Are Independent and Dependent Variables?
Both the independent variable and dependent variable are examined in an experiment using the scientific method, so it's important to know what they are and how to use them.
In a scientific experiment, you'll ultimately be changing or controlling the independent variable and measuring the effect on the dependent variable. This distinction is critical in evaluating and proving hypotheses.
Below you'll find more about these two types of variables, along with examples of each in sample science experiments, and an explanation of how to graph them to help visualize your data.
An independent variable is the condition that you change in an experiment. In other words, it is the variable you control. It is called independent because its value does not depend on and is not affected by the state of any other variable in the experiment. Sometimes you may hear this variable called the "controlled variable" because it is the one that is changed. Do not confuse it with a control variable, which is a variable that is purposely held constant so that it can't affect the outcome of the experiment.
The dependent variable is the condition that you measure in an experiment. You are assessing how it responds to a change in the independent variable, so you can think of it as depending on the independent variable. Sometimes the dependent variable is called the "responding variable."
If you are having a hard time identifying which variable is the independent variable and which is the dependent variable, remember the dependent variable is the one affected by a change in the independent variable. If you write out the variables in a sentence that shows cause and effect, the independent variable causes the effect on the dependent variable. If you have the variables in the wrong order, the sentence won't make sense.
Independent variable causes an effect on the dependent variable.
Example : How long you sleep (independent variable) affects your test score (dependent variable).
This makes sense, but:
Example : Your test score affects how long you sleep.
This doesn't really make sense (unless you can't sleep because you are worried you failed a test, but that would be a different experiment).
There is a standard method for graphing independent and dependent variables. The x-axis is the independent variable, while the y-axis is the dependent variable. You can use the DRY MIX acronym to help remember how to graph variables:
D = dependent variable R = responding variable Y = graph on the vertical or y-axis
M = manipulated variable I = independent variable X = graph on the horizontal or x-axis
Test your understanding with the scientific method quiz.
Independent and dependent variables in research
Experiments rely on capturing the relationship between independent and dependent variables to understand causal patterns. Researchers can observe what happens when they change a condition in their experiment or if there is any effect at all.
It's important to understand the difference between the independent variable and dependent variable. We'll look at the notion of independent and dependent variables in this article. If you are conducting experimental research, defining the variables in your study is essential for realizing rigorous research.
In experimental research, a variable refers to the phenomenon, person, or thing that is being measured and observed by the researcher. A researcher conducts a study to see how one variable affects another and make assertions about the relationship between different variables.
A typical research question in an experimental study addresses a hypothesized relationship between the independent variable manipulated by the researcher and the dependent variable that is the outcome of interest presumably influenced by the researcher's manipulation.
Take a simple experiment on plants as an example. Suppose you have a control group of plants on one side of a garden and an experimental group of plants on the other side. All things such as sunlight, water, and fertilizer being equal, both plants should be expected to grow at the same rate.
Now imagine that the plants in the experimental group are given a new plant fertilizer under the assumption that they will grow faster. Then you will need to measure the difference in growth between the two groups in your study.
In this case, the independent variable is the type of fertilizer used on your plants while the dependent variable is the rate of growth among your plants. If there is a significant difference in growth between the two groups, then your study provides support to suggest that the fertilizer causes higher rates of plant growth.
The independent variable is the element in your study that you intentionally change, which is why it can also be referred to as the manipulated variable.
You manipulate this variable to see how it might affect the other variables you observe, all other factors being equal. This means that you can observe the cause and effect relationships between one independent variable and one or multiple dependent variables.
Independent variables are directly manipulated by the researcher, while dependent variables are not. They are "dependent" because they are affected by the independent variable in the experiment. Researchers can thus study how manipulating the independent variable leads to changes in the main outcome of interest being measured as the dependent variable.
Note that while you can have multiple dependent variables, it is challenging to establish research rigor for multiple independent variables. If you are making so many changes in an experiment, how do you know which change is responsible for the outcome produced by the study? Studying more than one independent variable would require running an experiment for each independent variable to isolate its effects on the dependent variable.
This being said, it is certainly possible to employ a study design that involves multiple independent and dependent variables, as is the case with what is called a factorial experiment. For example, a psychological study examining the effects of sleep and stress levels on work productivity and social interaction would have two independent variables and two dependent variables.
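To see why this design is called factorial, note that it crosses every level of one independent variable with every level of the other. A small sketch (the group labels are invented for illustration):

```python
from itertools import product

# Hypothetical levels for the two independent variables.
sleep_levels = ["normal sleep", "sleep deprived"]
stress_levels = ["low stress", "high stress"]

# A full factorial design crosses every level of one IV with every
# level of the other: 2 x 2 = 4 experimental conditions.
conditions = list(product(sleep_levels, stress_levels))
for number, (sleep, stress) in enumerate(conditions, start=1):
    print(f"Condition {number}: {sleep}, {stress}")

# Participants in each condition are then measured on BOTH dependent
# variables (work productivity and social interaction).
```

Because every combination of levels appears, the researcher can separate the effect of each independent variable and also examine whether the two interact.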
Such a study would be complex and require careful planning to establish the necessary research rigor, however. If possible, consider narrowing your research to the examination of one independent variable to make it more manageable and easier to understand.
Let's consider an experiment in the social sciences. Suppose you want to determine the effectiveness of a new textbook compared to current textbooks in a particular school.
The new textbook is supposed to be better, but how can you prove it? Besides all the selling points that the textbook publisher makes, how do you know if the new textbook is any good? A rigorous study examining the effects of the textbook on classroom outcomes is in order.
The textbook given to students makes up the independent variable in your experimental study. The shift from the existing textbooks to the new one represents the manipulation of the independent variable in this study.
In any experiment, the dependent variable is observed to measure how it is affected by changes to the independent variable. Outcomes such as test scores and other performance metrics can make up the data for the dependent variable.
Now that we are changing the textbook in the experiment above, we should examine if there are any effects.
To do this, we will need two classrooms of students. As best as possible, the two sets of students should be of similar proficiency (or at least of similar backgrounds) and placed within similar conditions for teaching and learning (e.g., physical space, lesson planning).
The control group in our study will be one set of students using the existing textbook. By examining their performance, we can establish a baseline. The performance of the experimental group, which is the set of students using the new textbook, can then be compared with the baseline performance.
As a result, the change in test scores makes up the data for our dependent variable. We cannot directly affect how well students perform on the test, but we can conclude from our experiment whether the use of the new textbook might impact students' performance.
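As a rough sketch of this comparison, the baseline and the experimental group's scores can be summarized with their means. The scores below are invented for illustration:

```python
from statistics import mean

# Hypothetical test scores (out of 100) for the two classrooms.
control_scores = [68, 72, 75, 70, 66, 74]       # existing textbook (baseline)
experimental_scores = [74, 79, 71, 82, 77, 80]  # new textbook

baseline = mean(control_scores)
difference = mean(experimental_scores) - baseline

print(f"baseline mean: {baseline:.1f}")
print(f"difference in means: {difference:+.1f} points")
```

A positive difference alone does not prove that the textbook caused the gain; a significance test and careful control of other variables would still be needed.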
We can typically think of an independent variable as something a researcher can directly change. In the above example, we can change the textbook used by the teacher in class. If we're talking about plants, we can change the fertilizer.
Conversely, the dependent variable is something that we do not directly influence or manipulate. Strictly speaking, we cannot directly manipulate a student's performance on a test or the rate of growth of a plant, not without other factors such as new teaching methods or new fertilizer, respectively.
Understanding the distinction between a dependent variable and an independent variable is key to experimental research. Ultimately, the distinction can be reduced to which element in a study has been directly influenced by the researcher.
Given the potential complexities encountered in research, there is essential terminology for other variables in any experimental study. You might employ this terminology yourself or encounter it while reading other research.
A control variable is any factor that the researcher tries to keep constant as the independent variable changes. In the plant experiment described earlier in this article, the sunlight and water are each a controlled variable while the type of fertilizer used is the manipulated variable across control and experimental groups.
To ensure research rigor, the researcher needs to keep these control variables constant to dispel any concerns that differences in growth rate were being driven by sunlight or water, as opposed to the fertilizer being used.
Extraneous variables refer to any unwanted influence on the dependent variable that may confound the analysis of the study. For example, if bugs or animals ate the plants in your fertilizer study, this would greatly impact the rates of plant growth. This is why it would be important to control the environment and protect it from such threats.
Finally, independent variables can go by different names, such as subject variables or predictor variables, while dependent variables can also be referred to as responding variables or outcome variables. Whatever the terminology, the roles are the same: the independent variable influences the dependent variable in an experiment.
The use of the word "variables" is typically associated with quantitative and confirmatory research. Naturalistic qualitative research typically does not employ experimental designs or establish causality. Qualitative research often draws on observations, interviews, focus groups, and other forms of data collection that allow researchers to study the naturally occurring "messiness" of the social world, rather than controlling all variables to isolate a cause-and-effect relationship.
In limited circumstances, the idea of experimental variables can apply to participant observations in ethnography , where the researcher should be mindful of their influence on the environment they are observing.
However, the experimental paradigm is best left to quantitative studies and confirmatory research questions. Qualitative researchers in the social sciences are oftentimes more interested in observing and describing socially-constructed phenomena rather than testing hypotheses .
Nonetheless, the notion of independent and dependent variables does hold important lessons for qualitative researchers. Even if they don't employ variables in their study design, qualitative researchers often observe how one thing affects another. A theoretical or conceptual framework can then suggest potential cause-and-effect relationships in their study.
Measurement and Units of Analysis
When one variable causes another variable, we have what researchers call independent and dependent variables. In the example where gender was found to be causally linked to cell phone addiction, gender would be the independent variable and cell phone addiction would be the dependent variable. An independent variable is one that causes another. A dependent variable is one that is caused by the other. Dependent variables depend on independent variables. If you are struggling to figure out which is the dependent and which is the independent variable, there is a little trick, as follows:
Ask yourself the following question: Is X dependent upon Y? Now substitute words for X and Y. For example, is the level of success in an online class dependent upon the time spent online? Success in an online class is the dependent variable, because it is dependent upon something. In this case, we are asking if the level of success in an online class is dependent upon the time spent online. Time spent online is the independent variable. Table 4.2 provides you with an opportunity to practice identifying the dependent and the independent variable.
Q.1 Is success in an online class dependent upon gender?
Q.2 Is the prevalence of post-traumatic stress disorder in Canada dependent upon the level of funding for early intervention?
Q.3 Is the reporting of incidents of high school bullying dependent upon anti-bullying programs in high school?
Q.4 Is the survival rate of female heart attack victims correlated to hospital emergency room procedures?
While it is very common to hear the terms independent and dependent variable, extraneous variables are less common, which is surprising because an extraneous variable can destroy the integrity of a research study that claims to show a cause-and-effect relationship. An extraneous variable is a variable that may compete with the independent variable in explaining the outcome. Remember: if you are ever interested in identifying cause-and-effect relationships, you must always determine whether there are any extraneous variables you need to worry about. If an extraneous variable really is the reason for an outcome (rather than the IV), then we sometimes call it a confounding variable, because it has confused or confounded the relationship we are interested in (see example below).
Suppose we want to determine the effectiveness of new course curriculum for an online research methods class. We want to test how effective the new course curriculum is on student learning, compared to the old course curriculum. We are unable to use random assignment to equate our groups. Instead, we ask one of the college's most experienced online teachers to use the new online curriculum with one class of online students and the old curriculum with the other class of online students. Imagine that the students taking the new curriculum course (the experimental group) got higher grades than the control group (the old curriculum). Do you see any problems with claiming that the reason for the difference between the two groups is because of the new curriculum? The problem is that there are alternative explanations.
First, perhaps the difference is because the group of students in the new curriculum course were more experienced students, both in terms of age and where they were in their studies (more third year students than first year students). Perhaps the old curriculum class had a higher percentage of students for whom English is not their first language and they struggled with some of the material because of language barriers, which had nothing to do with the old curriculum. In other words, we have a problem, in that there could be alternative explanations for our findings. These alternative explanations are called extraneous variables and they can occur when we do not have random assignment. Indeed, it is very possible that the difference we saw between the two groups was due to other variables (i.e., experience level of students, English language proficiency), rather than the IV (new versus old curriculum).
It is important to note that researchers can and should attempt to control for extraneous variables, as much as possible. This can be done in two ways. The first is by employing standardized procedures. This means that the researcher attempts to ensure that all aspects of the experiment are the same, with the exception of the independent variable. For example, the researchers would use the same method for recruiting participants and they would conduct the experiment in the same setting. They would ensure that they give the same explanation to the participants at the beginning of the study and any feedback at the end of the study in exactly the same way. Any rewards for participation would be offered for all participants in the same manner. They could also ensure that the experiment occurs on the same day of the week (or month), or at the same time of day, and that the lab is kept at a constant temperature, a constant level of brightness, and a constant level of noise (Explore Psychology, 2019).
The second way that a researcher in an experiment can control for extraneous variables is to employ random assignment, to reduce the likelihood that characteristics specific to some of the participants have influenced the results. Random assignment means that every person chosen for an experiment has an equal chance of being assigned to either the test group or the control group (Explore Psychology, 2019). Chapter 6 provides more detail on random assignment, and explains the difference between a test group and a control group.
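A minimal sketch of random assignment, assuming a hypothetical pool of twenty students: shuffle the pool, then split it in half, so that every participant has an equal chance of ending up in either group.

```python
import random

# Hypothetical pool of participants for the curriculum study.
participants = [f"student_{i:02d}" for i in range(1, 21)]

random.seed(42)  # fixed seed only so the example is reproducible
pool = participants[:]
random.shuffle(pool)

# Split the shuffled pool in half: chance alone, not any participant
# characteristic, determines who lands in which group.
midpoint = len(pool) // 2
test_group = pool[:midpoint]
control_group = pool[midpoint:]

print(len(test_group), len(control_group))  # 10 10
```

With a large enough sample, this process tends to balance participant characteristics (age, experience, language background) across the two groups, which is precisely why it protects against the extraneous variables described above.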
An Introduction to Research Methods in Sociology Copyright © 2019 by Valerie A. Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Last updated: 14 February 2023
Independent variables are features or values fixed within the population or study under investigation. An example might be a subject's age within a study: other variables, such as what they eat, how long they sleep, and how much TV they watch, would not change the subject's age.
On the other hand, a dependent variable can be influenced by other factors or variables. For example, how well you perform on a series of tests (a dependent variable) could be influenced by how long you study or how much sleep you get before the night of the exam.
A better understanding of independent variables, specifically the types, how they function in research contexts, and how to distinguish them from dependent variables, will assist you in determining how to identify them in your studies.
Independent variables can be of several types, depending on the hypothesis and research. However, the most common types are experimental independent variables and subject variables.
Experimental variables are those that can be directly manipulated in a study. In other words, these are independent variables that you can manipulate to discover how they influence your dependent variables.
For example, you may have two study groups split by independent variables: one receiving a new drug treatment and one receiving a placebo. These types of studies generally require the random assignment of research participants to different groups to observe how results vary based on the influence of different independent variables.
A proper experiment requires you to randomly assign different levels of an independent variable to your participants.
Random assignment helps you control participant characteristics, so they don't affect your experimental results. This helps you to have confidence that your dependent variable results come solely from the experimental independent variable manipulation.
Subject variables are independent variables that can't be changed in a study but can be used to categorize study participants. They are mostly features that differ between study subjects. For instance, as a social researcher, you can use gender identification, race, education level, or income as key independent variables to classify your research subjects.
Unlike experimental variables, subject variables necessitate a quasi-experimental approach because there is no random assignment. This type of independent variable comprises features and attributes inherent within study participants; therefore, they cannot be assigned randomly.
Instead, you can develop a research approach in which you evaluate the findings of different groups of participants based on their features. It is important to note that any research design that uses non-random assignment is vulnerable to study biases such as sampling and selection bias.
As noted previously, independent variables are critical in developing a study design. This is because they assist researchers in determining cause-and-effect relationships. Controlled experiments require minimal to no outside influence to make conclusions.
Identifying independent variables is one way to eliminate external influences and achieve greater certainty that research results are representative. By controlling for outside influences as much as possible, you can make meaningful inferences about the link between independent and dependent variables.
In most cases, changes in the independent variables cause changes in the dependent variables. For example, if you compare groups that differ on an independent variable such as age, you might expect a dependent variable such as cognitive function or running speed to differ as well, provided the age difference is large. However, there are situations when variations in the independent variables do not influence the dependent variable.
Choosing independent variables within your research will be driven by the objectives of your study. Start by formulating a hypothesis about the outcome you anticipate, and then choose independent variables that you believe will significantly influence the dependent variables.
Make sure you have experimental and control groups that have identical features. They should only differ based on the treatment they get for the independent variable. In this case, your control group will undergo no treatment or changes in the independent variable, versus the experimental group, which will receive the treatment or a wide variation of the independent variable.
The type of study or experiment greatly impacts the nature of an independent variable. If you are doing an experiment involving a control condition or group, you will need to monitor and define the values of the independent variables you are using within test condition groups.
In an observational experiment, the explanatory variables' values are not predetermined, but instead are observed in their natural surroundings.
Model specification is the process of deciding which independent variables to incorporate into a statistical model. It involves extensive study, numerous specific topics, and statistical aspects.
Including one independent variable in a regression model entails performing a simple regression, while for more than one independent variable, it is a multiple regression. The names might be different, but the analysis, interpretation, and assumptions are all the same.
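As an illustrative sketch, both models can be fit with ordinary least squares; the data below are invented, and only the number of independent variables changes between the two fits:

```python
import numpy as np

# Invented data: hours studied (x1), hours slept (x2), exam score (y).
x1 = np.array([2, 4, 6, 8, 10], dtype=float)
x2 = np.array([8, 7, 7, 6, 5], dtype=float)
y = np.array([55, 62, 70, 75, 83], dtype=float)

# Simple regression: one independent variable (plus an intercept column).
X_simple = np.column_stack([np.ones_like(x1), x1])
coef_simple, *_ = np.linalg.lstsq(X_simple, y, rcond=None)

# Multiple regression: the same model with a second IV added.
X_multiple = np.column_stack([np.ones_like(x1), x1, x2])
coef_multiple, *_ = np.linalg.lstsq(X_multiple, y, rcond=None)

print("simple:  ", coef_simple)    # [intercept, slope for x1]
print("multiple:", coef_multiple)  # [intercept, slope for x1, slope for x2]
```

The mechanics are identical; the design matrix simply gains one column per additional independent variable.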
To better understand the concept of independent variables, have a look at these few examples used in different contexts:
Mental health context: As a medical researcher, you may be interested in finding out whether a new type of treatment can reduce anxiety in people suffering from a social anxiety disorder. Your study can include three groups of patients. One group receives the new treatment, another gets a different treatment, and the last gets no treatment. The type of treatment is the independent variable.
Workplace context: In this case, you may want to know if giving employees greater control over how they perform their duties results in increased job satisfaction. Your study will involve two groups of employees, one with a lot of say over how they do their jobs and the other without. In this scenario, the independent variable is the amount of control the employees have over their job.
Educational context: You can conduct a study to see if after-school math tutoring improves student performance on standardized math tests. In this example, one group of students will attend an after-school tutoring session three times a week, whereas another group will not receive this extra help. The independent variable is the involvement in after-school math tutoring sessions.
Organization context: You may want to know if the color of an office affects work efficiency. Your research will consider a group of employees working in white or yellow rooms. The independent variable is the color of the office.
A dependent variable changes as a result of the manipulation of the independent variable. In a nutshell, it is what you test or measure in an experiment. It is also known as a response variable, since it responds to changes in another variable, or as an outcome variable, because it represents the outcome you want to measure.
Statisticians also denote these as left-hand side variables because they are typically found on the left-hand side of a regression model. Typically, dependent variables are plotted on the y-axis of graphs.
For instance, in a study designed to evaluate how a certain treatment affects the symptoms of psychological disorders, the dependent variable might be identified as the severity of the symptoms a patient experiences. The treatment used would be the independent variable.
The results of an experiment are important because they can assist you in determining the extent to which changes in your independent variable cause variations in your dependent variable. They can also help forecast the degree to which your dependent variable will vary due to changes in the independent variable.
It can be challenging to differentiate between independent and dependent variables, especially when designing comprehensive research. In some circumstances, a dependent variable from one research study will be used as an independent variable in another. The key is to pay close attention to the study design.
To recognize independent variables in research, focus on determining whether the variable causes variation in another variable. Independent variables are also manipulated variables whose values are determined by the researchers. In certain experiments, notably in medicine, they are described as risk factors; whereas in others, they are referred to as experimental factors.
Keep in mind that control groups and treatments are often independent variables. And studies that use this approach tend to classify independent variables as categorical grouping variables that establish the experimental groups.
The approaches used to identify independent variables in observational research differ slightly. In these studies, independent variables explain, predict, or correlate with variation in the dependent variable. The study results are also changed or regulated by a variable. If you see an estimated impact size, it is an independent variable, irrespective of the type of study you are reading or designing.
To identify dependent variables, you must first determine if the variable is measurable within the research. Also, determine whether the variable relies on another variable in the experiment. If you discover that a variable is only subject to change or variability after other variables have been changed, it may be a dependent variable.
Both independent and dependent variables are mainly used in quasi-experimental and experimental studies. When conducting research, you can generate descriptive statistics to illustrate results. Following that, you would choose a suitable statistical test to validate your hypothesis.
The kind of variable, measurement level, and several independent variable levels will significantly influence your chosen test. Many studies use either the ANOVA or the t-test for data analysis and to obtain answers to research questions .
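As a sketch of that choice: with two groups an independent-samples t-test is typical, while with three or more groups a one-way ANOVA is typical. The anxiety scores below are invented, echoing the three-group treatment example above:

```python
from scipy import stats

# Invented anxiety scores after treatment for three groups.
new_treatment = [12, 10, 11, 9, 13]
existing_treatment = [15, 14, 16, 13, 15]
no_treatment = [20, 18, 21, 19, 22]

# Two groups: independent-samples t-test.
t_stat, p_t = stats.ttest_ind(new_treatment, existing_treatment)

# Three or more groups: one-way ANOVA.
f_stat, p_f = stats.f_oneway(new_treatment, existing_treatment, no_treatment)

print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_f:.6f}")
```

Here the grouping (type of treatment) is the independent variable and the anxiety score is the dependent variable; the test merely quantifies whether the group differences are larger than chance would explain.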
Other variables, in addition to independent and dependent variables, may have a major impact on a research outcome. Thus, it is vital to identify and take control of extraneous variables since they can cause variation in the relationship between the independent and dependent variables.
Some examples of extraneous variables include demand characteristics and experimenter effects. When these variables cannot be controlled in an experiment, they are usually called confounding variables .
You can use either a chart or a graph to visualize quantitative research results. Graphs have a typical display in which the independent variables lie on the horizontal x-axis and the dependent variables on the vertical y-axis. The presentation of data will depend on the nature of the variables in your research questions.
Having a working knowledge of independent and dependent variables is key to understanding how research projects work. There are various ways to think of independent variables. However, the best approach is to picture the independent variable as what you change and the dependent variable as what is influenced due to the variation.
In other words, consider the independent variable the cause and the dependent variable the effect. When visualizing these variables in a graph, place the independent variable on the x-axis and the dependent variable on the y-axis.
It is also essential to remember that there are other variables aside from the independent and dependent variables that might impact the outcome of an experiment. As a result, you should identify and control extraneous variables as much as possible to make a valid conclusion about the study findings.
An independent variable in research or an experiment is what the researcher manipulates or changes. The dependent variable, on the other hand, is what is measured. In general, the independent variable is in charge of influencing the dependent variable.
In research or an experiment, a variable refers to something that can be tested. You can use independent and dependent variables to design research .
No, because a dependent variable is reliant on the independent variable. Thus, a variable in a study can only be the cause (independent) or the effect (dependent). However, there are also cases in which a dependent variable from one study is used as an independent variable in another.
Yes, however, a study must include various research questions for multiple independent and dependent variables to be effective.
What are independent and dependent variables?
You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .
In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth, the amount of nutrients added would be the independent variable, and the resulting crop growth would be the dependent variable.
Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .
Quantitative observations involve measuring or counting something and expressing the result in numerical form, while qualitative observations involve describing something in non-numerical terms, such as its appearance, texture, or color.
To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.
Scope of research is determined at the beginning of your research process , prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.
Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation . A scope is needed for all types of research: quantitative , qualitative , and mixed methods .
To define your scope of research, consider the following:
Inclusion and exclusion criteria are predominantly used in non-probability sampling . In purposive sampling and snowball sampling , restrictions apply as to who can be included in the sample .
Inclusion and exclusion criteria are typically presented and discussed in the methodology section of your thesis or dissertation .
The purpose of theory-testing mode is to find evidence in order to disprove, refine, or support a theory. As such, generalisability is not the aim of theory-testing mode.
Due to this, the priority of researchers in theory-testing mode is to eliminate alternative causes for relationships between variables . In other words, they prioritise internal validity over external validity , including ecological validity .
Convergent validity shows how much a measure of one construct aligns with other measures of the same or related constructs .
On the other hand, concurrent validity is about how a measure matches up to some known criterion or gold standard, which can be another measure.
Although both types of validity are established by calculating the association or correlation between a test score and another variable , they represent distinct validation methods.
Validity tells you how accurately a method measures what it was designed to measure. There are 4 main types of validity:
Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.
Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:
Attrition refers to participants leaving a study. It always happens to some extent – for example, in randomised control trials for medical research.
Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .
Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.
While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.
Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.
Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.
You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.
Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.
When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.
For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).
On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analysing whether each one covers the aspects that the test was designed to cover.
A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts(in this case, math teachers), would have to evaluate the content validity by comparing the test to the learning objectives.
Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.
In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.
The higher the content validity, the more accurate the measurement of the construct.
If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.
Construct validity refers to how well a test measures the concept (or construct) it was designed to measure. Assessing construct validity is especially important when you’re researching concepts that can’t be quantified and/or are intangible, like introversion. To ensure construct validity, your test should be based on known indicators of introversion (operationalisation).
On the other hand, content validity assesses how well the test represents all aspects of the construct. If some aspects are missing or irrelevant parts are included, the test has low content validity.
Construct validity has convergent and discriminant subtypes. Together, they help determine whether a test measures the concept it was intended to measure.
The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.
Reproducibility and replicability are related terms.
Snowball sampling is a non-probability sampling method. Unlike probability sampling (which involves some form of random selection), snowball sampling relies on the initial individuals selected for the study to recruit new participants.
Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.
Snowball sampling is a non-probability sampling method, where there is not an equal chance for every member of the population to be included in the sample.
This means that you cannot use inferential statistics and make generalisations – often the goal of quantitative research. As such, a snowball sample is not representative of the target population, and is usually a better fit for qualitative research.
Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.
Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias.
Snowball sampling is best used in the following cases:
Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.
The main difference is that in stratified sampling, you draw a random sample from each subgroup (probability sampling). In quota sampling you select a predetermined number or proportion of units, in a non-random manner (non-probability sampling).
Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.
On the other hand, convenience sampling involves selecting whoever happens to be available, which means that not everyone has an equal chance of being selected – inclusion depends on the place, time, or day you are collecting your data.
Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.
However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.
In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection , using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
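As a rough sketch of that quota-filling procedure, assuming a toy population and made-up quotas (the gender split and quota sizes here are invented for illustration):

```python
import random

def quota_sample(population, quotas, seed=None):
    """Fill each subgroup's quota by taking whoever comes along first.
    Convenience order is simulated here by shuffling the population."""
    rng = random.Random(seed)
    pool = population[:]
    rng.shuffle(pool)  # stands in for the order people become available
    sample, filled = [], {group: 0 for group in quotas}
    for person, group in pool:
        if filled.get(group, 0) < quotas.get(group, 0):
            sample.append((person, group))
            filled[group] += 1
    return sample

# Hypothetical population of 100 people, tagged with a subgroup
population = [(i, "male" if i % 2 else "female") for i in range(100)]
sample = quota_sample(population, quotas={"male": 5, "female": 5}, seed=1)
```

Collection stops for a subgroup as soon as its quota is met, which is what makes the selection non-random.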
A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.
Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous, so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous, as units share characteristics.
Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .
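The contrast between the two methods can be illustrated with a small Python sketch (the school names and unit numbers are hypothetical):

```python
import random

def stratified_sample(strata, per_stratum, seed=None):
    """Stratified: draw a random sample of units from every subgroup."""
    rng = random.Random(seed)
    return {name: rng.sample(units, per_stratum) for name, units in strata.items()}

def cluster_sample(clusters, n_clusters, seed=None):
    """Cluster: randomly pick whole groups and keep all of their units."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return {name: clusters[name] for name in chosen}

# Three hypothetical schools of 10 students each
schools = {"A": list(range(10)), "B": list(range(10, 20)), "C": list(range(20, 30))}
strat = stratified_sample(schools, per_stratum=2, seed=0)  # some units from every school
clust = cluster_sample(schools, n_clusters=1, seed=0)      # every unit from one school
```

Note that the stratified sample touches every subgroup, while the cluster sample keeps entire groups intact.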
When your population is large in size, geographically dispersed, or difficult to contact, it’s necessary to use a sampling method.
This allows you to gather information from a smaller part of the population, i.e. the sample, and make accurate statements by using statistical analysis. A few sampling methods include simple random sampling, convenience sampling, and snowball sampling.
The two main types of social desirability bias are:
Response bias refers to conditions or factors that take place during the process of responding to surveys, affecting the responses. One type of response bias is social desirability bias.
Demand characteristics are aspects of experiments that may give away the research objective to participants. Social desirability bias occurs when participants automatically try to respond in ways that make them seem likeable in a study, even if it means misrepresenting how they truly feel.
Participants may use demand characteristics to infer social norms or experimenter expectancies and act in socially desirable ways, so you should try to control for demand characteristics wherever possible.
A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.
Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.
Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .
These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.
Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.
Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.
These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.
Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .
You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.
You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.
For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.
In general, the peer review process follows the following steps:
Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.
It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.
Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.
Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.
However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.
Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.
Blinding is important to reduce bias (e.g., observer bias, demand characteristics) and ensure a study’s internal validity.
If participants know whether they are in a control or treatment group, they may adjust their behaviour in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.
Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment.
Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.
Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.
Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.
Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.
You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.
To implement random assignment, assign a unique number to every member of your study’s sample.
Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
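The lottery approach described above can be sketched in a few lines of Python (the participant numbers, group labels, and seed are invented for illustration):

```python
import random

def random_assignment(participants, groups=("control", "treatment"), seed=None):
    """Randomly assign each participant to a group by shuffling the
    whole sample and then dealing members out in round-robin order,
    which also keeps group sizes balanced."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {p: groups[i % len(groups)] for i, p in enumerate(shuffled)}

# Example: assign six numbered participants to two groups
assignment = random_assignment([1, 2, 3, 4, 5, 6], seed=42)
```

Fixing a seed makes the assignment reproducible for auditing; omit it for a genuinely unpredictable draw.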
Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.
In contrast, random assignment is a way of sorting the sample into control and experimental groups.
Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.
Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.
In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.
Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.
Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.
Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.
For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimise the amount of data cleaning you’ll need to do.
After data collection, you can use data standardisation and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
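A minimal sketch of such a cleaning pass, assuming a toy list of body weights in kilograms and an invented plausible range as a simple outlier rule:

```python
def clean(records, valid_range=(30, 200)):
    """Toy cleaning pass on a list of body weights (kg): drop duplicate
    values, drop missing values, and drop values outside a plausible
    range (a simple rule for catching entry-error outliers)."""
    # 1. De-duplicate while preserving order
    seen, unique = set(), []
    for r in records:
        if r not in seen:
            seen.add(r)
            unique.append(r)
    # 2. Remove missing values
    complete = [r for r in unique if r is not None]
    # 3. Keep only plausible values
    low, high = valid_range
    return [x for x in complete if low <= x <= high]

weights = [70, 72, 70, None, 71, 69, 700]  # 700 kg is a likely entry error
cleaned = clean(weights)  # → [70, 72, 71, 69]
```

In real projects, whether a duplicate or an outlier should be removed (rather than corrected or kept) is a judgement call that depends on the study design.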
Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.
In this process, you review, analyse, detect, modify, or remove ‘dirty’ data to make your dataset ‘clean’. Data cleaning is also called data cleansing or data scrubbing.
Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimise or resolve these.
Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.
Observer bias occurs when a researcher’s expectations, opinions, or prejudices influence what they perceive or record in a study. It usually affects studies when observers are aware of the research aims or hypotheses. This type of research bias is also called detection bias or ascertainment bias .
The observer-expectancy effect occurs when researchers influence the results of their own study through interactions with participants.
Researchers’ own beliefs and expectations about the study results may unintentionally influence participants through demand characteristics .
You can use several tactics to minimise observer bias.
Naturalistic observation is a valuable tool because of its flexibility, external validity, and suitability for topics that can’t be studied in a lab setting.
The downsides of naturalistic observation include its lack of scientific control, ethical considerations, and potential for bias from observers and subjects.
Naturalistic observation is a qualitative research method where you record the behaviours of your research subjects in real-world settings. You avoid interfering or influencing anything in a naturalistic observation.
You can think of naturalistic observation as ‘people watching’ with a purpose.
Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.
Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.
You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.
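Either ordering strategy is straightforward to implement; this sketch contrasts a fixed logical order with a per-respondent random order (the question labels are invented):

```python
import random

def questionnaire_order(questions, randomise=False, seed=None):
    """Return the questionnaire either in its fixed logical order,
    or shuffled per respondent to counter order effects."""
    if not randomise:
        return questions[:]
    rng = random.Random(seed)  # seed per respondent, e.g. their ID
    shuffled = questions[:]
    rng.shuffle(shuffled)
    return shuffled

questions = ["Q1 (simple)", "Q2", "Q3", "Q4 (complex)"]
fixed = questionnaire_order(questions)
varied = questionnaire_order(questions, randomise=True, seed=7)
```

Seeding with a respondent identifier keeps each person's order stable across sessions while still varying the order between respondents.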
Questionnaires can be self-administered or researcher-administered.
Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.
Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.
In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:
Depending on your study topic, there are various other methods of controlling variables .
An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.
A true experiment (aka a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.
However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).
For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.
A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.
A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.
To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.
Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.
Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
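For illustration, combining item responses into an overall scale score is typically just a sum; the four-item scale and responses below are made up:

```python
def likert_scale_score(item_responses):
    """Combine a respondent's individual Likert item scores (e.g. 1-5)
    into one overall scale score by summing them."""
    return sum(item_responses)

# One respondent's answers to a hypothetical 4-item introversion scale,
# where each item runs 1 = strongly disagree ... 5 = strongly agree
responses = [4, 5, 3, 4]
score = likert_scale_score(responses)  # → 16, on a possible range of 4–20
```

Reverse-keyed items (if the scale has any) would need to be recoded before summing.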
The type of data determines what statistical tests you should use to analyse your data.
A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘x affects y because…’).
A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.
A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.
A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).
Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.
Sometimes only cross-sectional data are available for analysis; other times your research question may only require a cross-sectional study to answer it.
Cross-sectional studies cannot establish a cause-and-effect relationship or analyse behaviour over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .
Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.
Longitudinal study | Cross-sectional study |
---|---|
Repeated observations | Observations at a single point in time |
Observes the same sample multiple times | Observes different samples (a ‘cross-section’) in the population |
Follows changes in participants over time | Provides a snapshot of society at a given point |
Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.
The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .
Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.
A correlation reflects the strength and/or direction of the association between two or more variables.
A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .
A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.
Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.
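Pearson's r can be computed directly from its definition; this Python sketch uses invented data for two quantitative variables:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    lists of quantitative values: the covariance of x and y divided
    by the product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: study hours vs. exam scores for five students
hours_studied = [1, 2, 3, 4, 5]
exam_score = [52, 55, 61, 64, 68]
r = pearson_r(hours_studied, exam_score)  # close to +1: strong positive linear relationship
```

Values near +1 or −1 indicate a strong linear relationship; values near 0 indicate a weak or non-linear one.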
Controlled experiments establish causality, whereas correlational studies only show associations between variables.
In general, correlational research is high in external validity while experimental research is high in internal validity .
The third variable and directionality problems are two main reasons why correlation isn’t causation .
The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.
The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.
As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups . Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.
Overall, your focus group questions should be:
Social desirability bias is the tendency for interview participants to give responses that will be viewed favourably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.
Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.
This type of bias in research can also occur in observations if the participants know they’re being observed. They might alter their behaviour accordingly.
A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews .
The four most common types of interviews are:
An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.
Unstructured interviews are best used when:
A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:
The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.
There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.
A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:
More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .
When conducting research, collecting original data has significant advantages:
However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.
A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.
A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.
If something is a mediating variable :
Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.
Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.
Discrete and continuous variables are two types of quantitative variables :
Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).
Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).
You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .
Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.
You want to find out how blood sugar levels are affected by drinking diet cola and regular cola, so you conduct an experiment .
No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both.
Yes, but including more than one of either type requires multiple research questions .
For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.
You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .
To ensure the internal validity of an experiment , you should only change one independent variable at a time.
To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.
A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.
Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.
There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control, and randomisation.
In restriction, you restrict your sample by only including certain subjects that have the same values of potential confounding variables.
In matching, you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable.
In statistical control, you include potential confounders as variables in your regression.
In randomisation, you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
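As an illustration of the matching approach specifically, this sketch pairs each treated subject with a comparison subject that has identical values on two hypothetical confounders, age and sex:

```python
def match_pairs(treatment, comparison, keys):
    """Pair each treated subject with an unused comparison subject
    whose values on the potential confounders in `keys` are identical."""
    pairs, used = [], set()
    for t in treatment:
        profile = tuple(t[k] for k in keys)
        for i, c in enumerate(comparison):
            if i not in used and tuple(c[k] for k in keys) == profile:
                pairs.append((t["id"], c["id"]))
                used.add(i)
                break
    return pairs

# Hypothetical subjects; age and sex are the potential confounders
treated = [{"id": 1, "age": 30, "sex": "f"}, {"id": 2, "age": 45, "sex": "m"}]
controls = [{"id": 10, "age": 45, "sex": "m"}, {"id": 11, "age": 30, "sex": "f"}]
pairs = match_pairs(treated, controls, keys=("age", "sex"))  # → [(1, 11), (2, 10)]
```

Real matching usually tolerates near-matches (e.g. age within a band) rather than requiring exact equality as this toy version does.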
In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).
The process of turning abstract concepts into measurable variables and indicators is called operationalisation .
In statistics, ordinal and nominal variables are both considered categorical variables .
Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.
A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.
Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .
If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .
‘Controlling for a variable’ means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.
Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.
A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.
There are 4 main types of extraneous variables :
The difference between explanatory and response variables is simple:
The term ‘explanatory variable’ is sometimes preferred over ‘independent variable’ because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.
Multiple independent variables may also be correlated with each other, so ‘explanatory variables’ is a more appropriate term.
On graphs, the explanatory variable is conventionally placed on the x -axis, while the response variable is placed on the y -axis.
A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.
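A quick sketch of this idea with NumPy and simulated data: pairwise correlations are computed two variables at a time, and with three or more variables you get a matrix of all the pairwise values (the variables here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.8 * x + rng.normal(scale=0.5, size=100)   # built to correlate with x
z = rng.normal(size=100)                        # independent of both

# With three variables, np.corrcoef returns a 3x3 matrix of
# pairwise Pearson correlations (diagonal entries are 1).
corr = np.corrcoef([x, y, z])
```

`corr[0, 1]` is the x–y correlation, `corr[0, 2]` the x–z correlation, and so on; each entry is still a two-variable test, just collected into one matrix.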
An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called ‘independent’ because it’s not influenced by any other variables in the study.
Independent variables are also called:
A dependent variable is what changes as a result of the independent variable manipulation in experiments. It’s what you’re interested in measuring, and it ‘depends’ on your independent variable.
In statistics, dependent variables are also called:
Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research.
In research, you might have come across something called the hypothetico-deductive method. It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.
Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning, where you start with specific observations and form general conclusions.
Deductive reasoning is also called deductive logic.
Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.
Inductive reasoning is also called inductive logic or bottom-up reasoning.
In inductive research, you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.
Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.
Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.
There are many different types of inductive reasoning that people use formally or informally.
Here are a few common types:
It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.
While experts have a deep understanding of research methods, the people you’re studying can provide you with valuable insights you may have missed otherwise.
Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.
Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.
Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.
Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.
You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity.
When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.
Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity, content validity, and criterion validity to achieve construct validity.
Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity, the others being content validity, face validity, and criterion validity.
There are two subtypes of construct validity.
Attrition bias can skew your sample so that your final sample differs significantly from your original sample. Your sample is biased because some groups from your population are underrepresented.
With a biased final sample, you may not be able to generalise your findings to the original population that you sampled from, so your external validity is compromised.
There are seven threats to external validity: selection bias, history, the experimenter effect, the Hawthorne effect, the testing effect, aptitude-treatment interaction, and situation effects.
The two types of external validity are population validity (whether you can generalise to other groups of people) and ecological validity (whether you can generalise to other situations and settings).
The external validity of a study is the extent to which you can generalise your findings to different groups of people, situations, and measures.
Attrition bias is a threat to internal validity. In experiments, differential rates of attrition between treatment and control groups can skew results.
This bias can affect the relationship between your independent and dependent variables. It can make variables appear to be correlated when they are not, or vice versa.
Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.
There are eight threats to internal validity: history, maturation, instrumentation, testing, selection bias, regression to the mean, social interaction, and attrition.
A sampling error is the difference between a population parameter and a sample statistic.
A statistic refers to measures about the sample, while a parameter refers to measures about the population.
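The parameter/statistic distinction and the resulting sampling error can be sketched in a few lines of Python with a made-up population (the numbers here are simulated, purely for illustration):

```python
import random

random.seed(42)

# A made-up population of 10,000 measurements; its mean is a parameter.
population = [random.gauss(50, 10) for _ in range(10_000)]
parameter = sum(population) / len(population)

# The mean of a simple random sample is a statistic; the gap between
# the statistic and the parameter is this sample's sampling error.
sample = random.sample(population, 100)
statistic = sum(sample) / len(sample)
sampling_error = statistic - parameter
```

Rerunning the sampling step gives a different statistic each time, which is exactly why a sampling error is attached to the sample rather than to the population.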
Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.
Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling.
There are three key steps in systematic sampling:
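The interval-based selection described above can be sketched in a few lines of Python (the population of 1,500 numbered people is a made-up example, chosen so the interval works out to every 15th person):

```python
import random

def systematic_sample(population, sample_size):
    """Select members at a regular interval, starting from a random point."""
    interval = len(population) // sample_size   # the sampling interval k
    start = random.randrange(interval)          # random start within the first interval
    return population[start::interval][:sample_size]

random.seed(0)
people = list(range(1, 1501))                 # a numbered list of 1,500 people
sample = systematic_sample(people, 100)       # selects every 15th person
```

Note that only the starting point is random; every later selection is determined by the interval, which is why a randomly ordered list matters for this method.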
Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the number of subgroups for each characteristic to get the total number of groups.
For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 × 5 = 15 subgroups.
You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.
Using stratified sampling will allow you to obtain more precise (lower-variance) statistical estimates of whatever you are trying to measure.
For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.
In stratified sampling, researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).
Once divided, each subgroup is randomly sampled using another probability sampling method.
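A minimal sketch of the divide-then-sample procedure in Python, using simple random sampling within each stratum and proportional allocation (the population, the educational-attainment split, and the 10% sampling fraction are all made up for illustration):

```python
import random

def stratified_sample(population, key, fraction):
    """Divide subjects into strata by a shared characteristic,
    then randomly sample each stratum proportionally."""
    strata = {}
    for person in population:
        strata.setdefault(key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))   # proportional allocation
        sample.extend(random.sample(members, k))     # SRS within the stratum
    return sample

random.seed(3)
# Hypothetical population: (id, educational attainment), 100 with a
# degree and 300 without.
population = [(i, "degree" if i % 4 == 0 else "no degree") for i in range(400)]
sample = stratified_sample(population, key=lambda p: p[1], fraction=0.1)
```

Because each stratum is sampled separately, every subgroup is guaranteed representation in the final sample, which is the property the surrounding text describes.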
Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.
But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples.
In multistage sampling, you can use probability or non-probability sampling methods.
For a probability sample, you have to use probability sampling at every stage. You can mix it up by using simple random sampling, systematic sampling, or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.
Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.
The clusters should ideally each be mini-representations of the population as a whole.
There are three types of cluster sampling: single-stage, double-stage, and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.
Cluster sampling is more time- and cost-efficient than other probability sampling methods, particularly when it comes to large samples spread across a wide geographical area.
However, it provides less statistical certainty than other methods, such as simple random sampling, because it is difficult to ensure that your clusters properly represent the population as a whole.
If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity. However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.
If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.
The American Community Survey is an example of simple random sampling. In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.
Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population. Each member of the population has an equal chance of being selected. Data are then collected from as large a percentage as possible of this random subset.
Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.
In multistage sampling, or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.
This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from county to city to neighbourhood) to create a sample that’s less expensive and time-consuming to collect data from.
In non-probability sampling, the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.
Common non-probability sampling methods include convenience sampling, voluntary response sampling, purposive sampling, snowball sampling, and quota sampling.
Probability sampling means that every member of the target population has a known chance of being included in the sample.
Probability sampling methods include simple random sampling, systematic sampling, stratified sampling, and cluster sampling.
Samples are used to make inferences about populations. Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.
While a between-subjects design has fewer threats to internal validity, it also requires more participants for high statistical power than a within-subjects design.
Advantages:
Disadvantages:
In a factorial design, multiple independent variables are tested.
If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.
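The way factorial conditions are formed (each level of one independent variable combined with each level of the other) can be sketched as a Cartesian product; the two variables and their levels below are hypothetical examples:

```python
from itertools import product

# Hypothetical 2x3 factorial design: two independent variables,
# each level of one combined with each level of the other.
caffeine = ["0 mg", "100 mg"]        # IV 1 (2 levels)
sleep = ["4 h", "6 h", "8 h"]        # IV 2 (3 levels)

# 2 levels x 3 levels = 6 distinct experimental conditions.
conditions = list(product(caffeine, sleep))
```

In a mixed factorial design, one of these factors would be assigned between subjects and the other within subjects, but the set of conditions is built the same way.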
Within-subjects designs have many potential threats to internal validity, but they are also very statistically powerful.
Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment.
Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity as they can use real-world interventions instead of artificial laboratory settings.
In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference between this and a true experiment is that the groups are not randomly assigned.
In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.
In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.
The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.
A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.
A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.
In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.
Triangulation can help:
But triangulation can also pose problems:
There are four main types of triangulation:
An experimental design is a set of procedures that you plan in order to examine the relationship between variables that interest you.
To design a successful experiment, first identify:
When designing the experiment, first decide:
Exploratory research explores the main aspects of a new or barely researched question.
Explanatory research explains the causes and effects of an already widely researched question.
The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.
An observational study could be a good fit for your research if your research question is based on things you observe. If you have ethical, logistical, or practical concerns that make an experimental design challenging, consider an observational study. Remember that in an observational study, it is critical that there be no interference or manipulation of the research subjects. Since it’s not an experiment, there are no control or treatment groups either.
These are four of the most common mixed methods designs:
Triangulation in research means using multiple datasets, methods, theories, and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.
Triangulation is mainly used in qualitative research, but it’s also commonly applied in quantitative research. Mixed methods research always uses triangulation.
Operationalisation means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
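One concrete way to compute "how likely a pattern could have arisen by chance" is a permutation test, sketched below in plain Python with invented scores for a treatment and a control group (the data and group sizes are illustrative assumptions, not from a real study):

```python
import random

random.seed(7)

# Made-up scores for a treatment group and a control group.
treatment = [12.1, 14.3, 13.8, 15.0, 13.2]
control = [10.9, 11.5, 12.0, 10.2, 11.8]
observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation test: repeatedly reassign group labels at random and ask
# how often a mean difference at least this large appears by chance.
pooled = treatment + control
n_perm = 10_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed - 1e-9:   # tolerance for floating-point rounding
        count += 1
p_value = count / n_perm          # small p-value: unlikely to be chance
```

A small p-value here means that random relabelling of participants almost never produces a difference as large as the one observed, which is the core logic of hypothesis testing.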
There are five common approaches to qualitative research:
There are various approaches to qualitative data analysis, but they all share five steps in common:
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.
In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.
Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.
Methods are the specific tools and procedures you use to collect and analyse data (e.g., experiments, surveys, and statistical tests).
In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.
In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.
The research methods you use depend on the type of data you need to answer your research question.
Want to contact us directly? No problem. We are always here for you.
Our support team is here to help you daily via chat, WhatsApp, email, or phone between 9:00 a.m. and 11:00 p.m. CET.
Our APA experts default to APA 7 for editing and formatting. For the Citation Editing Service you are able to choose between APA 6 and 7.
Yes, if your document is longer than 20,000 words, you will get a sample of approximately 2,000 words. This sample edit gives you a first impression of the editor’s editing style and a chance to ask questions and give feedback.
You will receive the sample edit within 24 hours after placing your order. You then have 24 hours to let us know if you’re happy with the sample or if there’s something you would like the editor to do differently.
Read more about how the sample edit works
Yes, you can upload your document in sections.
We try our best to ensure that the same editor checks all the different sections of your document. When you upload a new file, our system recognizes you as a returning customer, and we immediately contact the editor who helped you before.
However, we cannot guarantee that the same editor will be available. Your chances are higher if
Please note that the shorter your deadline is, the lower the chance that your previous editor will be available.
If your previous editor isn’t available, then we will inform you immediately and look for another qualified editor. Fear not! Every Scribbr editor follows the Scribbr Improvement Model and will deliver high-quality work.
Yes, our editors also work during the weekends and holidays.
Because we have many editors available, we can check your document 24 hours per day and 7 days per week, all year round.
If you choose a 72-hour deadline and upload your document on a Thursday evening, you’ll have your thesis back by Sunday evening!
Yes! Our editors are all native speakers, and they have lots of experience editing texts written by ESL students. They will make sure your grammar is perfect and point out any sentences that are difficult to understand. They’ll also notice your most common mistakes, and give you personal feedback to improve your writing in English.
Every Scribbr order comes with our award-winning Proofreading & Editing service, which combines two important stages of the revision process.
For a more comprehensive edit, you can add a Structure Check or Clarity Check to your order. With these building blocks, you can customize the kind of feedback you receive.
You might be familiar with a different set of editing terms. To help you understand what you can expect at Scribbr, we created this table:
Types of editing | Available at Scribbr? |
---|---|
Proofreading | This is the “proofreading” in Scribbr’s standard service. It can only be selected in combination with editing. |
Copy editing | This is the “editing” in Scribbr’s standard service. It can only be selected in combination with proofreading. |
Line editing | Select the Structure Check and Clarity Check to receive a comprehensive edit equivalent to a line edit. |
Developmental editing | This kind of editing involves heavy rewriting and restructuring. Our editors cannot help with this. |
View an example
When you place an order, you can specify your field of study and we’ll match you with an editor who has familiarity with this area.
However, our editors are language specialists, not academic experts in your field. Your editor’s job is not to comment on the content of your dissertation, but to improve your language and help you express your ideas as clearly and fluently as possible.
This means that your editor will understand your text well enough to give feedback on its clarity, logic and structure, but not on the accuracy or originality of its content.
Good academic writing should be understandable to a non-expert reader, and we believe that academic editing is a discipline in itself. The research, ideas and arguments are all yours – we’re here to make sure they shine!
After your document has been edited, you will receive an email with a link to download the document.
The editor has made changes to your document using ‘Track Changes’ in Word. This means that you only have to accept or reject the changes made in the text one by one.
It is also possible to accept all changes at once. However, we strongly advise you not to do so for the following reasons:
You choose the turnaround time when ordering. We can return your dissertation within 24 hours, 3 days, or 1 week. These timescales include weekends and holidays. As soon as you’ve paid, the deadline is set, and we guarantee to meet it! We’ll notify you by text and email when your editor has completed the job.
Very large orders might not be possible to complete in 24 hours. On average, our editors can complete around 13,000 words in a day while maintaining our high quality standards. If your order is longer than this and urgent, contact us to discuss possibilities.
Always leave yourself enough time to check through the document and accept the changes before your submission deadline.
Scribbr specialises in editing study-related documents. We check:
Calculate the costs
The fastest turnaround time is 24 hours.
You can upload your document at any time and choose between four deadlines:
At Scribbr, we promise to make every customer 100% happy with the service we offer. Our philosophy: Your complaint is always justified – no denial, no doubts.
Our customer support team is here to find the solution that helps you the most, whether that’s a free new edit or a refund for the service.
Yes, in the order process you can indicate your preference for American, British, or Australian English.
If you don’t choose one, your editor will follow the style of English you currently use. If your editor has any questions about this, we will contact you.
Dependent Variable The variable that depends on other factors that are measured. These variables are expected to change as a result of an experimental manipulation of the independent variable or variables. It is the presumed effect.
Independent Variable The variable that is stable and unaffected by the other variables you are trying to measure. It refers to the condition of an experiment that is systematically manipulated by the investigator. It is the presumed cause.
Cramer, Duncan and Dennis Howitt. The SAGE Dictionary of Statistics . London: SAGE, 2004; Penslar, Robin Levin and Joan P. Porter. Institutional Review Board Guidebook: Introduction . Washington, DC: United States Department of Health and Human Services, 2010; "What are Dependent and Independent Variables?" Graphic Tutorial.
Don't feel bad if you are confused about which is the dependent variable and which is the independent variable in social and behavioral sciences research. However, it's important that you learn the difference because framing a study using these variables is a common approach to organizing the elements of a social sciences research study in order to discover relevant and meaningful results. Specifically, it is important for these two reasons:
A variable in research simply refers to a person, place, thing, or phenomenon that you are trying to measure in some way. The best way to understand the difference between a dependent and independent variable is that the meaning of each is implied by what the words tell us about the variable you are using. You can do this with a simple exercise from the website, Graphic Tutorial. Take the sentence, "The [independent variable] causes a change in [dependent variable] and it is not possible that [dependent variable] could cause a change in [independent variable]." Insert the names of variables you are using in the sentence in the way that makes the most sense. This will help you identify each type of variable. If you're still not sure, consult with your professor before you begin to write.
Fan, Shihe. "Independent Variable." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 592-594; "What are Dependent and Independent Variables?" Graphic Tutorial; Salkind, Neil J. "Dependent Variable." In Encyclopedia of Research Design , Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 348-349;
The process of examining a research problem in the social and behavioral sciences is often framed around methods of analysis that compare, contrast, correlate, average, or integrate relationships between or among variables . Techniques include associations, sampling, random selection, and blind selection. Designation of the dependent and independent variable involves unpacking the research problem in a way that identifies a general cause and effect and classifying these variables as either independent or dependent.
The variables should be outlined in the introduction of your paper and explained in more detail in the methods section . There are no rules about the structure and style for writing about independent or dependent variables but, as with any academic writing, clarity and being succinct is most important.
After you have described the research problem and its significance in relation to prior research, explain why you have chosen to examine the problem using a method of analysis that investigates the relationships between or among independent and dependent variables . State what it is about the research problem that lends itself to this type of analysis. For example, if you are investigating the relationship between corporate environmental sustainability efforts [the independent variable] and dependent variables associated with measuring employee satisfaction at work using a survey instrument, you would first identify each variable and then provide background information about the variables. What is meant by "environmental sustainability"? Are you looking at a particular company [e.g., General Motors] or are you investigating an industry [e.g., the meat packing industry]? Why is employee satisfaction in the workplace important? How does a company make their employees aware of sustainability efforts and why would a company even care that its employees know about these efforts?
Identify each variable for the reader and define each . In the introduction, this information can be presented in a paragraph or two when you describe how you are going to study the research problem. In the methods section, you build on the literature review of prior studies about the research problem to describe in detail background about each variable, breaking each down for measurement and analysis. For example, what activities do you examine that reflect a company's commitment to environmental sustainability? Levels of employee satisfaction can be measured by a survey that asks about things like volunteerism or a desire to stay at the company for a long time.
The structure and writing style of describing the variables and their application to analyzing the research problem should be stated and unpacked in such a way that the reader obtains a clear understanding of the relationships between the variables and why they are important. This is also important so that the study can be replicated in the future using the same variables but applied in a different way.
Fan, Shihe. "Independent Variable." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 592-594; "What are Dependent and Independent Variables?" Graphic Tutorial; “Case Example for Independent and Dependent Variables.” ORI Curriculum Examples. U.S. Department of Health and Human Services, Office of Research Integrity; Salkind, Neil J. "Dependent Variable." In Encyclopedia of Research Design , Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 348-349; “Independent Variables and Dependent Variables.” Karl L. Wuensch, Department of Psychology, East Carolina University [posted email exchange]; “Variables.” Elements of Research. Dr. Camille Nebeker, San Diego State University.
Published on September 19, 2022 by Rebecca Bevans . Revised on June 21, 2023.
In statistical research, a variable is defined as an attribute of an object of study. Choosing which variables to measure is central to good experimental design.
If you want to test whether some plant species are more salt-tolerant than others, some key variables you might measure include the amount of salt you add to the water, the species of plants being studied, and variables related to plant health like growth and wilting.
You need to know which types of variables you are working with in order to choose appropriate statistical tests and interpret the results of your study.
You can usually identify the type of variable by asking two questions:
- Types of data: Quantitative vs categorical variables
- Parts of the experiment: Independent vs dependent variables
- Other common types of variables
- Other interesting articles
- Frequently asked questions about variables
Data is a specific measurement of a variable – it is the value you record in your data sheet. Data is generally divided into two categories:
A variable that contains quantitative data is a quantitative variable; a variable that contains categorical data is a categorical variable. Each of these types of variables can be broken down into further types.
When you collect quantitative data, the numbers you record represent real amounts that can be added, subtracted, divided, etc. There are two types of quantitative variables: discrete and continuous.
Type of variable | What does the data represent? | Examples |
---|---|---|
Discrete variables (aka integer variables) | Counts of individual items or values. | Number of plants in an experiment; number of wilted leaves. |
Continuous variables (aka ratio variables) | Measurements of continuous or non-finite values. | Plant height; volume of water added; temperature. |
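As a toy illustration (not from the article), you could flag a recorded column as discrete or continuous by checking whether every value is a whole-number count. This is only a rough heuristic, since integer-valued measurements also exist:

```python
# Toy sketch: classify a column of recorded values as discrete (counts)
# or continuous (measurements). A heuristic, not a rigorous rule.

def classify_quantitative(values):
    """Return 'discrete' if every value is a whole number, else 'continuous'."""
    if all(float(v).is_integer() for v in values):
        return "discrete"
    return "continuous"

# Counts of wilted leaves per plant -> discrete
print(classify_quantitative([0, 2, 5, 1]))       # discrete
# Plant heights in cm -> continuous
print(classify_quantitative([10.2, 12.7, 9.8]))  # continuous
```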
Categorical variables represent groupings of some kind. They are sometimes recorded as numbers, but the numbers represent categories rather than actual amounts of things.
There are three types of categorical variables: binary, nominal, and ordinal variables.
Type of variable | What does the data represent? | Examples |
---|---|---|
Binary variables (aka dichotomous variables) | Yes or no outcomes. | Whether a plant is wilted (yes/no). |
Nominal variables | Groups with no rank or order between them. | Plant species. |
Ordinal variables | Groups that are ranked in a specific order.* | A 1–5 plant-health rating; star ratings on product reviews. |
*Note that sometimes a variable can work as more than one type! An ordinal variable can also be used as a quantitative variable if the scale is numeric and doesn’t need to be kept as discrete integers. For example, star ratings on product reviews are ordinal (1 to 5 stars), but the average star rating is quantitative.
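The star-rating example can be made concrete in a few lines of Python (the ratings below are hypothetical):

```python
# Individual star ratings are ordinal (1-5): only their order matters.
ratings = [5, 4, 4, 3, 5]  # hypothetical product reviews

# Treated as ordinal, a sensible summary is the median rating.
median_rating = sorted(ratings)[len(ratings) // 2]
print(median_rating)  # 4

# Treated as quantitative, the mean star rating is also meaningful.
average = sum(ratings) / len(ratings)
print(average)  # 4.2
```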
To keep track of your salt-tolerance experiment, you make a data sheet where you record information about the variables in the experiment, like salt addition and plant health.
To gather information about plant responses over time, you can fill out the same data sheet every few days until the end of the experiment. This example sheet is color-coded according to the type of variable: nominal, continuous, ordinal, and binary.
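As a rough sketch, such a data sheet could be represented in plain Python. The column names and values below are illustrative, not taken from the example sheet:

```python
# Minimal sketch of a salt-tolerance data sheet, one column per variable.
# Comments label the variable type of each column.
data_sheet = {
    "species":       ["A", "A", "B", "B"],         # nominal
    "salt_added_g":  [0.0, 5.0, 0.0, 5.0],         # continuous (treatment)
    "height_cm":     [12.1, 9.4, 13.0, 11.2],      # continuous (response)
    "health_rating": [5, 3, 5, 4],                 # ordinal (1-5 scale)
    "wilted":        [False, True, False, False],  # binary
}

# Filling the sheet out "every few days" would mean appending one such
# record per plant per observation date.
for row in zip(*data_sheet.values()):
    print(dict(zip(data_sheet.keys(), row)))
```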
Experiments are usually designed to find out what effect one variable has on another – in our example, the effect of salt addition on plant growth.
You manipulate the independent variable (the one you think might be the cause) and then measure the dependent variable (the one you think might be the effect) to find out what this effect might be.
You will probably also have variables that you hold constant (control variables) in order to focus on your experimental treatment.
Type of variable | Definition | Example (salt tolerance experiment) |
---|---|---|
Independent variables (aka treatment variables) | Variables you manipulate in order to affect the outcome of an experiment. | The amount of salt added to each plant’s water. |
Dependent variables (aka response variables) | Variables that represent the outcome of the experiment. | Any measurement of plant health and growth: in this case, plant height and wilting. |
Control variables | Variables that are held constant throughout the experiment. | The temperature and light in the room the plants are kept in, and the volume of water given to each plant. |
In this experiment, we have one independent and three dependent variables.
The other variables in the sheet can’t be classified as independent or dependent, but they do contain data that you will need in order to interpret your dependent and independent variables.
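A toy simulation can make the three roles concrete. The response function below is invented for illustration; it simply assumes growth declines with salt while the control variables stay fixed:

```python
# Toy model (assumed, not real data) of the salt-tolerance experiment.

def plant_height(salt_g, temperature_c=22, water_ml=100):
    """Hypothetical growth model: height falls as salt increases.
    temperature_c and water_ml are control variables, held constant
    across all plants, so they do not vary in this sketch."""
    return 15.0 - 1.2 * salt_g

for salt in [0, 2, 4, 6]:            # manipulate the independent variable
    print(salt, plant_height(salt))  # measure the dependent variable
```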
When you do correlational research, the terms “dependent” and “independent” don’t apply, because you are not trying to establish a cause-and-effect relationship (causation).
However, there might be cases where one variable clearly precedes the other (for example, rainfall leads to mud, rather than the other way around). In these cases you may call the preceding variable (i.e., the rainfall) the predictor variable and the following variable (i.e., the mud) the outcome variable.
Once you have defined your independent and dependent variables and determined whether they are categorical or quantitative, you will be able to choose the correct statistical test.
But there are many other ways of describing variables that help with interpreting your results. Some useful types of variables are listed below.
Type of variable | Definition | Example (salt tolerance experiment) |
---|---|---|
Confounding variables | A variable that hides the true effect of another variable in your experiment. This can happen when another variable is closely related to a variable you are interested in, but you haven’t controlled it in your experiment. Be careful with these, because confounding variables run a high risk of introducing a variety of research biases to your work. | Pot size and soil type might affect plant survival as much or more than salt additions. In an experiment you would control these potential confounders by holding them constant. |
Latent variables | A variable that can’t be directly measured, but that you represent via a proxy. | Salt tolerance in plants cannot be measured directly, but can be inferred from measurements of plant health in our salt-addition experiment. |
Composite variables | A variable that is made by combining multiple variables in an experiment. These variables are created when you analyze data, not when you measure it. | The three plant health variables could be combined into a single plant-health score to make it easier to present your findings. |
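One common way to build such a composite (a sketch, not the article's method) is to standardize each health measurement and average the z-scores, flipping the sign of any variable where lower values mean better health. The measurements below are hypothetical:

```python
# Composite plant-health score from three (hypothetical) measurements.
from statistics import mean, pstdev

def zscores(xs):
    """Standardize a list of values to mean 0 and unit (population) SD."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

height  = [12.1, 9.4, 13.0, 11.2]  # cm
leaves  = [30, 22, 34, 28]         # leaf count
wilting = [1, 3, 1, 2]             # ordinal severity (lower = healthier)

# Flip wilting's sign so that higher always means healthier, then
# average the three standardized columns per plant.
health_score = [
    mean(vals)
    for vals in zip(zscores(height),
                    zscores(leaves),
                    [-z for z in zscores(wilting)])
]
print([round(s, 2) for s in health_score])
```

Because each column is centered at zero, the composite scores also average to zero across plants; plant 3 comes out healthiest and plant 2 least healthy in this toy data.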
If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.
You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause, while a dependent variable is the effect.
In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth, the independent variable is the amount of nutrients added to the crop field, and the dependent variable is the biomass of the crops at harvest time.
Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design.
A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.
A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.
In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.
Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).
Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).
You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results.
Discrete and continuous variables are two types of quantitative variables: discrete variables represent counts (e.g. the number of plants), while continuous variables represent measurable amounts (e.g. height or temperature).
Bevans, R. (2023, June 21). Types of Variables in Research & Statistics | Examples. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/methodology/types-of-variables/
Independent and Dependent Variables
In an experiment, the independent variable is the variable that is varied or manipulated by the researcher.
The dependent variable is the response that is measured. One way to think about it is that the dependent variable depends on the change in the independent variable. In theory, a change in the independent variable will lead to a change in the dependent variable.
In a study of how different doses of a drug affect the severity of symptoms, a researcher could compare the frequency and intensity of symptoms when different doses are administered.
Here the independent variable is the dose and the dependent variable is the frequency/intensity of symptoms .
The rudder on a boat directs the course of the boat. By changing the position of the rudder (turning it left or right), the rudder moves a certain way in the water, and that movement changes the trajectory of the boat.
Here the independent variable is the rudder , while the dependent variable is the trajectory of the boat.
Tips:
Independent and dependent variables are often referred to in other ways. For instance, independent variables are sometimes called experimental variables or predictor variables. Dependent variables are sometimes called outcome variables.
One way to tell whether a variable is independent or dependent is to consider when each variable occurred. Typically, the independent variable is the one that happened earlier. For example, if a dataset has a variable for the year someone was born and a variable for their level of happiness in 2019, the birth year is almost certainly the independent variable, because it happened before the current measure of happiness (assuming we are not surveying newborn babies). In effect, this question measures whether someone’s year of birth (perhaps interpreted as generational affiliation) relates to how happy they were in 2019.
Examples of Independent and Dependent Variables
Variables in psychology are things that can be changed or altered, such as a characteristic or value. Variables are generally used in psychology experiments to determine if changes to one thing result in changes to another.
Variables in psychology play a critical role in the research process. By systematically changing some variables in an experiment and measuring what happens as a result, researchers are able to learn more about cause-and-effect relationships.
The two main types of variables in psychology are the independent variable and the dependent variable. Both variables are important in the process of collecting data about psychological phenomena.
This article discusses different types of variables that are used in psychology research. It also covers how to operationalize these variables when conducting experiments.
Students often report problems with identifying the independent and dependent variables in an experiment. While this task can become more difficult as the complexity of an experiment increases, the distinction is usually straightforward in a psychology experiment.
So how do you differentiate between the independent and dependent variables? Start by asking yourself what the experimenter is manipulating. The things that change, either naturally or through direct manipulation from the experimenter, are generally the independent variables. What is being measured? The dependent variable is the one that the experimenter is measuring.
Intervening variables, also sometimes called intermediate or mediator variables, are factors that play a role in the relationship between two other variables. For example, sleep problems in university students are often influenced by factors such as stress. As a result, stress might be an intervening variable that plays a role in how much sleep people get, which may then influence how well they perform on exams.
Independent and dependent variables are not the only variables present in many experiments. In some cases, extraneous variables may also play a role. This type of variable is one that may have an impact on the relationship between the independent and dependent variables.
For example, in our previous example of an experiment on the effects of sleep deprivation on test performance, other factors such as age, gender, and academic background may have an impact on the results. In such cases, the experimenter will note the values of these extraneous variables so any impact can be controlled for.
There are two basic types of extraneous variables: participant variables, which relate to characteristics of the participants themselves, and situational variables, which relate to the environment in which the study takes place. Demand characteristics and experimenter effects are other common extraneous variables.
In many cases, extraneous variables are controlled for by the experimenter. A controlled variable is one that is held constant throughout an experiment.
In the case of participant variables, the experiment might select participants that are the same in background and temperament to ensure that these factors don't interfere with the results. Holding these variables constant is important for an experiment because it allows researchers to be sure that all other variables remain the same across all conditions.
Using controlled variables means that when changes occur, the researchers can be sure that these changes are due to the manipulation of the independent variable and not caused by changes in other variables.
It is important to also note that a controlled variable is not the same thing as a control group. The control group in a study is the group of participants who do not receive the treatment or change in the independent variable.
All other variables between the control group and experimental group are held constant (i.e., they are controlled). The dependent variable being measured is then compared between the control group and experimental group to see what changes occurred because of the treatment.
If a variable cannot be controlled for, it becomes what is known as a confounding variable. This type of variable can have an impact on the dependent variable, which can make it difficult to determine if the results are due to the influence of the independent variable, the confounding variable, or an interaction of the two.
An operational definition describes how the variables are measured and defined in the study. Before conducting a psychology experiment, it is essential to create firm operational definitions for both the independent and dependent variables.
For example, in our imaginary experiment on the effects of sleep deprivation on test performance, we would need to create very specific operational definitions for our two variables. If our hypothesis is "Students who are sleep deprived will score significantly lower on a test," then we would have a few different concepts to define: what counts as being "sleep deprived," and how "scoring lower on a test" is measured.
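A hedged sketch of what operationalizing might look like in code; the thresholds below are illustrative choices, not values from any study:

```python
# Illustrative operational definitions for the sleep-deprivation example.
# The cutoff and scale are assumptions made for this sketch.
SLEEP_DEPRIVED_MAX_HOURS = 5  # "sleep deprived" = slept 5 hours or less
TEST_MAX_SCORE = 100          # "test performance" = score out of 100

def is_sleep_deprived(hours_slept):
    """Operational definition of the independent variable's condition."""
    return hours_slept <= SLEEP_DEPRIVED_MAX_HOURS

def performance(score):
    """Operationalize 'test performance' as the fraction of the max score."""
    return score / TEST_MAX_SCORE

print(is_sleep_deprived(4))  # True
print(performance(82))       # 0.82
```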
Once all the variables are operationalized, we're ready to conduct the experiment.
Variables play an important part in psychology research. Manipulating an independent variable and measuring the dependent variable allows researchers to determine if there is a cause-and-effect relationship between them.
Understanding the different types of variables used in psychology research is important if you want to conduct your own psychology experiments. It is also helpful for people who want to better understand what the results of psychology research really mean and become more informed consumers of psychology information.
Independent and dependent variables are used in experimental research. Unlike some other types of research (such as correlational studies), experiments allow researchers to evaluate cause-and-effect relationships between two variables.
Researchers can use statistical analyses to determine the strength of a relationship between two variables in an experiment. Two of the most common ways to do this are to calculate a p-value or a correlation. The p-value indicates if the results are statistically significant while the correlation can indicate the strength of the relationship.
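As a sketch of both ideas, the snippet below computes a Pearson correlation by the standard formula and estimates a two-sided p-value with a simple permutation test. The data are hypothetical, and a real analysis would typically use a statistics library:

```python
# Pearson correlation plus a permutation-test p-value, in pure Python.
import random
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient by the standard formula."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_p(xs, ys, n=2000, seed=0):
    """Two-sided p-value: how often does shuffling ys give a correlation
    at least as extreme as the observed one?"""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(n):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / n

hours_slept = [4, 5, 6, 7, 8, 9]        # hypothetical data
test_score  = [61, 64, 70, 74, 79, 83]
r = pearson_r(hours_slept, test_score)
print(round(r, 3), permutation_p(hours_slept, test_score))
```

Here the correlation is strong and the permutation p-value is small, so the relationship would be called statistically significant at conventional thresholds.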
In an experiment on how sugar affects short-term memory, sugar intake would be the independent variable and scores on a short-term memory task would be the dependent variable.
In an experiment looking at how caffeine intake affects test anxiety, the amount of caffeine consumed before a test would be the independent variable and scores on a test anxiety assessment would be the dependent variable.
Just as with other types of research, the independent variable in a cognitive psychology study would be the variable that the researchers manipulate. The specific independent variable would vary depending on the specific study, but it might be focused on some aspect of thinking, memory, attention, language, or decision-making.
American Psychological Association. Operational definition . APA Dictionary of Psychology.
American Psychological Association. Mediator . APA Dictionary of Psychology.
Altun I, Cınar N, Dede C. The contributing factors to poor sleep experiences in according to the university students: A cross-sectional study . J Res Med Sci . 2012;17(6):557-561. PMID:23626634
Skelly AC, Dettori JR, Brodt ED. Assessing bias: The importance of considering confounding . Evid Based Spine Care J . 2012;3(1):9-12. doi:10.1055/s-0031-1298595
By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."
Effectiveness of Eye Movement Desensitization and Reprocessing-EMDR Method in Patients with Chronic Subjective Tinnitus
Variables | 1. Group EMDR and Masking | 2. Group Masking and EMDR | |||||||
---|---|---|---|---|---|---|---|---|---|
n | % | X | p | n | % | X | p | |
Sex | Male | 7 | 58 | 0.333 | 0.564 | 9 | 75 | 3.000 | 0.083 |
Female | 5 | 42 | 3 | 25 | |||||
Total | 12 | 100 | 12 | 100 | |||||
Marital status | Married | 10 | 83 | 5.333 | 0.021 | 10 | 83 | 5.333 | 0.021 |
Single | 2 | 17 | 2 | 17 | |||||
Total | 12 | 100 | 12 | 100 | |||||
Number of children | None | 3 | 25 | 2.667 | 0.446 | 2 | 17 | 2.000 | 0.572 |
1 Child | 1 | 8 | 3 | 25 | |||||
2 Children | 5 | 42 | 2 | 17 | |||||
3 Children | 3 | 25 | 5 | 42 | |||||
Total | 12 | 100 | 12 | 100 | |||||
Job | Retired | 1 | 8 | 4.000 | 0.549 | x * | X * | 11.333 | 0.023 |
Officer | 3 | 25 | 2 | 17 | |||||
Student | 2 | 17 | 1 | 8 | |||||
Other | 1 | 8 | 1 | 8 | |||||
Housewife | 4 | 33 | 1 | 8 | |||||
Self-Employment | 1 | 8 | 8 | 58 | |||||
Total | 12 | 100 | 12 | 100 | |||||
Age | 15–20 age | 2 | 17 | 2.167 | 0.705 | x | x | 4.457 | 0.198 |
21–30 age | 1 | 8 | 1 | 8 | |||||
31–40 age | 2 | 17 | 3 | 25 | |||||
41–50 age | 3 | 25 | 6 | 50 | |||||
51 age and up | 4 | 33 | 2 | 17 | |||||
Total | 12 | 100 | 12 | 100 |
1. Group EMDR and Masking | 2. Group Masking and EMDR | ||||||||
---|---|---|---|---|---|---|---|---|---|
Variables | n | % | X | p | n | % | X | p | |
Causality | Family problems | 7 | 58 | 4.500 | 0.105 | 3 | 25 | 3.000 | 0.83 |
Trauma | 4 | 33 | x * | X * | |||||
Other | 1 | 8 | 9 | 75 | |||||
Total | 12 | 100 | 12 | 100 | |||||
Similarity | Wind | 1 | 10 | 6.000 | 0.199 | x | x | 8.727 | 0.190 |
Buzzing | 5 | 50 | 5 | 45 | |||||
Engine | 1 | 10 | 1 | 9 | |||||
Water | 2 | 20 | 1 | 9 | |||||
Ultrason | 1 | 10 | x | x | |||||
Bird | x | x | 1 | 9 | |||||
Bell | x | X | 1 | 9 | |||||
Whistle | x | x | 1 | 9 | |||||
Insect | x | X | 1 | 9 | |||||
Total | 10 | 100 | 11 | 100 | |||||
Onset | One year | 3 | 25 | 4.500 | 0.105 | 6 | 50 | 3.000 | 0.083 |
Two years | 1 | 8 | 3 | 25 | |||||
Three years | 2 | 17 | 1 | 8 | |||||
Four years | x | x | 1 | 8 | |||||
Five years and up | 6 | 50 | 1 | 8 | |||||
Total | 12 | 100 | 12 | 100 | |||||
Localization | Right | 3 | 25 | 3.000 | 0.083 | 7 | 58 | 3.500 | 0.174 |
Bilateral | 9 | 75 | 3 | 25 | |||||
Left | x | x | 2 | 17 | |||||
Total | 12 | 100 | 12 | 100 | |||||
Hearing loss | − | 0 | 0 | 8.333 | 0.004 | 0 | 0 | 8.333 | 0.004 |
+ | 12 | 100 | 12 | 100 | |||||
Total | 12 | 100 | 12 | 100 |
Groups | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|
Group 1 EMDR | Group 1 Masking | 2. Group Masking | Group 2 EMDR | 3. Group Therapy Not Applied | ||||||
Mean | SD | Mean | SD | Mean | SD | Mean | SD | Mean | SD | |
THI Functional | 21.33 | 9.04 | 11.50 | 8.74 | 21.33 | 0.73 | 21.67 | 12.24 | 21.83 | 12.49 |
THI Emotional | 21.33 | 7.92 | 8.92 | 8.08 | 17.83 | 9.51 | 18.83 | 8.46 | 18.33 | 9.06 |
THI Catastrophic | 11.67 | 6.14 | 4.00 | 3.81 | 11.83 | 4.93 | 11.17 | 4.93 | 8.67 | 5.28
Total THI score | 54.33 | 19.39 | 24.42 | 19.69 | 51.00 | 24.2 | 51.67 | 23.57 | 48.83 | 24.80 |
Total VAS Score | 25.50 | 4.08 | 8.33 | 6.31 | 25.58 | 2.78 | 24.33 | 5.48 | 18.92 | 6.13 |
Scales | Groups | Test | Median | IQR | Decrease from pre-test to post-test | z | p-Value |
---|---|---|---|---|---|---|---|
THI | 1. Group EMDR | Pre-test | 54.33 | 55 | 55% | −3.062 | 0.002 |
| | Post-test | 24.83 | 22 | 22% | | |
| 1. Group Masking | Pre-test | 24.41 | 21 | 21% | −0.052 | 0.959 |
| | Post-test | 24.33 | 21 | 21% | | |
| 2. Group Masking | Pre-test | 51.00 | 53 | 53% | −1.284 | 0.199 |
| | Post-test | 53.66 | 60 | 60% | | |
| 2. Group EMDR | Pre-test | 51.66 | 58 | 58% | −3.061 | 0.002 |
| | Post-test | 14.16 | 12 | 12% | | |
| 3. Group Control | Pre-test | 48.83 | 49 | 49% | −0.423 | 0.673 |
| | Post-test | 50.25 | 47 | 47% | | |
VAS | 1. Group EMDR | Pre-test | 25.50 | 26 | 26% | −3.063 | 0.002 |
| | Post-test | 7.83 | 6 | 6% | | |
| 1. Group Masking | Pre-test | 8.33 | 7 | 7% | −1.632 | 0.103 |
| | Post-test | 8.33 | 7 | 7% | | |
| 2. Group Masking | Pre-test | 25.58 | 27 | 27% | −1.342 | 0.180 |
| | Post-test | 24.50 | 26 | 26% | | |
| 2. Group EMDR | Pre-test | 24.33 | 25 | 25% | −3.063 | 0.002 |
| | Post-test | 8.50 | 9 | 9% | | |
| 3. Group Control | Pre-test | 18.91 | 21 | 21% | −1.632 | 0.102 |
| | Post-test | 18.58 | 21 | 21% | | |
Scales | Groups | n | Median | IQR | Min. | Max. | Mean Rank | Sum of Rank | U | p-Value |
---|---|---|---|---|---|---|---|---|---|---|
THI | 1. Group EMDR | 12 | 21.11 | 3 | 1 | 6 | 7.42 | 89.50 | 11.000 | 0.002
1. Group Masking | 12 | 37.00 | 24 | 1 | 6 | 17.58 | 211.50 | |||
Total | 24 | |||||||||
2. Group Masking | 12 | 33.29 | 26 | 1 | 6 | 16.50 | 198.00 | 14.000 | 0.023 | |
2. Group EMDR | 12 | 13.89 | 7 | 1 | 5 | 8.50 | 67.00 | |||
Total | 24 | |||||||||
1. Group EMDR | 12 | 16.77 | 6 | 1 | 6 | 21.22 | 94.00 | 16.000 | 0.007 | |
3. Group Control | 12 | 38.77 | 28 | 1 | 6 | 45.50 | 176.00 | |
Total | 24 | |||||||||
VAS | 1. Group Masking | 12 | 32.44 | 24 | 1 | 5 | 32.33 | 178.00 | 20.000 | 0.504
1. Group EMDR | 12 | 35.45 | 23 | 1 | 6 | 32.67 | 162.00 | |||
Total | 24 | |||||||||
2. Group Masking | 12 | 34.71 | 1 | 6 | 33.17 | 128.00 | 24.000 | 0.607 | |
3. Group EMDR | 12 | 41.22 | 1 | 6 | 31.83 | 132.00 | ||||
Total | 24 | |||||||||
2. Group Masking | 12 | 23.331 | 1 | 6 | 18.83 | 126.00 | 21.000 | 0.012 | |
3. Group Control | 12 | 41.229 | 1 | 6 | 36.17 | 134.00 | ||||
Total | 24 |
Bal, F.; Kırış, M. Effectiveness of Eye Movement Desensitization and Reprocessing-EMDR Method in Patients with Chronic Subjective Tinnitus. Brain Sci. 2024 , 14 , 918. https://doi.org/10.3390/brainsci14090918
In the study above, the dependent variable was the participants' tinnitus levels, and the independent variables were the EMDR and masking treatments. The dependent-variable data were collected with the Visual Analog Scale and the Tinnitus Handicap Inventory (THI).