Inductive and Deductive Reasoning in Critical Thinking

Guide To Inductive & Deductive Reasoning

Induction vs. Deduction

October 15, 2008, by The Critical Thinking Co. Staff

Induction and deduction are pervasive elements in critical thinking. They are also somewhat misunderstood terms. Arguments based on experience or observation are best expressed inductively, while arguments based on laws or rules are best expressed deductively. Most arguments are mainly inductive. In fact, inductive reasoning usually comes much more naturally to us than deductive reasoning.

Inductive reasoning moves from specific details and observations (typically of nature) to the more general underlying principles or processes that explain them (e.g., Newton's law of gravity). It is open-ended and exploratory, especially at the beginning. The premises of an inductive argument are believed to support the conclusion, but do not ensure it. Thus, the conclusion of an induction is regarded as a hypothesis. In the inductive method, also called the scientific method, observation of nature is the authority.

In contrast, deductive reasoning typically moves from general truths to specific conclusions. It opens with an expansive explanation (statements known or believed to be true) and continues with predictions for specific observations supporting it. Deductive reasoning is narrow in nature and is concerned with testing or confirming a hypothesis. It is dependent on its premises: a false premise can lead to a false result, and inconclusive premises will yield an inconclusive conclusion. Deductive reasoning leads to a confirmation (or not) of our original theories; if the premises are true and the reasoning is valid, it guarantees the correctness of the conclusion. Logic is the authority in the deductive method.

If you can strengthen your argument or hypothesis by adding another piece of information, you are using inductive reasoning. If you cannot improve your argument by adding more evidence, you are employing deductive reasoning.

Deductive reasoning vs inductive reasoning: A comparison

Introduction

Deductive reasoning and inductive reasoning are two important types of reasoning that play a crucial role in various aspects of life, from problem-solving to decision-making. Both types of reasoning involve drawing conclusions, but they differ in their approaches and processes.

Deductive reasoning is a type of logical thinking that starts with broad generalizations and narrows down to specific conclusions. It is based on the principle that if the initial premises are true and the reasoning is valid, then the conclusion must be true as well. In deductive reasoning, the conclusion is already contained within the premises.

On the other hand, inductive reasoning is a form of logical thinking that starts with specific observations or patterns and generalizes them into broader theories or hypotheses. It is based on the principle that if a specific observation or pattern holds true for a set of cases, it is likely to hold true for similar cases in the future. Inductive reasoning involves making inferences or predictions based on patterns or evidence.

Understanding both deductive and inductive reasoning is important because they complement each other and contribute to a well-rounded approach to critical thinking. Deductive reasoning enables us to make precise and accurate conclusions based on established principles and facts. It is particularly useful in fields such as mathematics and formal logic, where precision and certainty are paramount.

On the other hand, inductive reasoning allows us to make educated guesses or predictions based on incomplete information or patterns. It is commonly used in scientific research, where observations are made to form hypotheses and theories. Inductive reasoning helps us make sense of the world by allowing us to draw conclusions from incomplete or uncertain information.

In conclusion, deductive reasoning and inductive reasoning are two essential types of reasoning that have different approaches and processes. While deductive reasoning allows us to make precise conclusions based on established principles, inductive reasoning enables us to make educated guesses or predictions based on observations and patterns. Both types of reasoning have their strengths and weaknesses, and understanding both is crucial for effective critical thinking and decision-making.

Deductive reasoning

Deductive reasoning is a type of logical thinking that involves reaching conclusions based on established principles or premises. It is a top-down approach, where the conclusions are derived from general statements or theories.

In deductive reasoning, the conclusions made must be true if the premises are true. This is because deductive reasoning relies on the principle of validity, which means that the conclusion logically follows from the premises. If the premises are true, the conclusion must also be true.

Examples of deductive reasoning

An example of deductive reasoning is:

  • All birds have feathers. (Premise 1)
  • Swans are birds. (Premise 2)
  • Therefore, swans have feathers. (Conclusion)

In this example, the conclusion is deduced from the premises. If the premises are true (birds have feathers and swans are birds), then the conclusion (swans have feathers) must also be true.

Another example of deductive reasoning is:

  • All mammals are warm-blooded. (Premise 1)
  • Dogs are mammals. (Premise 2)
  • Therefore, dogs are warm-blooded. (Conclusion)

Again, if the premises are true (mammals are warm-blooded and dogs are mammals), then the conclusion (dogs are warm-blooded) must also be true.

Process of deductive reasoning

The process of deductive reasoning involves:

  • Starting with general statements or principles (known as premises) that are assumed to be true.
  • Using logical thinking to draw specific conclusions based on those premises.
  • Ensuring that the conclusions are valid and follow logically from the premises.
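The steps above can be sketched in code by treating categories as sets: a valid categorical syllogism then corresponds to the transitivity of the subset relation. This is a minimal illustration only; the set names and members are invented for the example, not taken from the article.

```python
# Model the "All birds have feathers / Swans are birds" syllogism with sets.
# The members listed here are illustrative assumptions.

feathered_things = {"sparrow", "swan", "eagle", "duck"}
birds = {"sparrow", "swan", "eagle", "duck"}
swans = {"swan"}

# Premise 1: all birds have feathers (birds is a subset of feathered_things).
premise_1 = birds <= feathered_things

# Premise 2: swans are birds (swans is a subset of birds).
premise_2 = swans <= birds

# Conclusion: subset relations are transitive, so if both premises hold,
# swans must also be a subset of feathered_things.
conclusion = swans <= feathered_things

print(premise_1, premise_2, conclusion)  # True True True
```

Note that the code mirrors the limitation discussed below: the conclusion is only as good as the premises, i.e. as the sets we chose to start with.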

Strengths and weaknesses of deductive reasoning

  • Strengths : One of the main strengths of deductive reasoning is that it guarantees the truth of the conclusions if the premises are true. This makes deductive reasoning a powerful tool for reaching accurate and certain conclusions. It is also a systematic and organized way of analyzing information.
  • Weaknesses : However, deductive reasoning can be limited by the accuracy and truthfulness of the premises. If the premises are false or inaccurate, the conclusions reached through deductive reasoning may also be false. Additionally, deductive reasoning may not be suitable for situations where complete certainty is not possible or when dealing with ambiguous or uncertain information.

Inductive Reasoning

Inductive reasoning is a type of reasoning that involves drawing general conclusions based on specific observations or evidence. It is a bottom-up approach where specific instances or examples are used to make generalizations or predictions about an entire population or future outcomes. Inductive reasoning is often used in scientific research, data analysis, and everyday reasoning.

Examples of Inductive Reasoning

  • Observation: Every time I go outside, I see that the grass is wet.
  • Conclusion: Therefore, I conclude that it has rained.
  • Observation: I have noticed that every time I eat strawberries, I get an allergic reaction.
  • Conclusion: Hence, I infer that I am allergic to strawberries.

In both examples, the observations made in specific instances are used to make general conclusions about a broader phenomenon or situation.

Process of Inductive Reasoning

The process of inductive reasoning involves several steps:

  • Making observations: This step involves collecting data or evidence from specific instances or cases.
  • Finding patterns: In this step, the collected data is examined to identify any recurring patterns or correlations.
  • Making a generalization: Based on the observed patterns, a general conclusion or hypothesis is formed.
  • Testing the conclusion: The general conclusion is then tested using additional evidence or observations to assess its accuracy or validity.
  • Drawing the final conclusion: Based on the results of the testing, a final conclusion is drawn, which may be revised or refined as more evidence becomes available.
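The observe-generalize-test-revise loop described above can be sketched in a few lines of code. The observations (swan colors) and the `generalize` helper are invented for illustration; they are not part of the original article.

```python
# A sketch of the inductive loop: observe, generalize, test against new
# evidence, and revise. The data are invented for the example.

def generalize(observations):
    """Form a hypothesis: if every case seen so far shares one value,
    conjecture that all cases have that value; otherwise, no generalization."""
    values = set(observations)
    return values.pop() if len(values) == 1 else None

observations = ["white", "white", "white"]      # every swan seen so far
hypothesis = generalize(observations)
print("Hypothesis: all swans are", hypothesis)  # "white" - but only probable

# A single new observation can overturn an inductive conclusion:
observations.append("black")                    # a black swan turns up
hypothesis = generalize(observations)
print("Revised hypothesis:", hypothesis)        # None - refuted, must revise
```

The final step shows why inductive conclusions are probabilistic rather than certain: one contrary observation forces the generalization to be revised.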

Strengths of Inductive Reasoning

  • Inductive reasoning allows us to explore new ideas and generate hypotheses based on observed patterns or evidence.
  • It is often used in scientific research to make predictions and formulate theories.
  • Inductive reasoning is flexible and adaptable, allowing for the consideration of multiple perspectives and the incorporation of new information.

Weaknesses of Inductive Reasoning

  • Inductive reasoning does not guarantee the certainty or accuracy of the conclusions drawn. The conclusions are based on probabilities rather than definite truths.
  • The observations made in specific instances may not always be representative of the entire population or future outcomes.
  • Inductive reasoning can be influenced by personal biases or limited data, leading to faulty generalizations or predictions.

In summary, inductive reasoning is a valuable tool for making generalizations and predictions based on specific observations or evidence. It allows us to explore new ideas and generate hypotheses, although its conclusions are probabilistic rather than definitive. Understanding the process and strengths and weaknesses of inductive reasoning can help us critically evaluate information and make informed decisions.

Differences between deductive and inductive reasoning

Definition of deduction and induction

Deductive reasoning is a type of logical reasoning that starts with general premises and uses them to derive specific conclusions. It follows a top-down approach, where the conclusions drawn are guaranteed to be true if the premises are true.

Inductive reasoning , on the other hand, is a type of logical reasoning that starts with specific observations or data and uses them to formulate a general conclusion or theory. It follows a bottom-up approach, where the conclusions drawn are probabilistic and are based on the evidence collected.

Key characteristics of deductive reasoning

  • Validity : Deductive reasoning is concerned with the logical validity of an argument. If the premises of a deductive argument are true and the argument is valid, then the conclusion must also be true.
  • Certainty : Deductive reasoning provides certain conclusions. It allows for a definitive level of certainty, as the conclusions are derived from established premises using logical rules.
  • General to specific : Deductive reasoning moves from general statements to specific conclusions. It starts with universal principles or rules and applies them to specific cases.

Key characteristics of inductive reasoning

  • Probability : Inductive reasoning deals with probability rather than certainty. The conclusions drawn from inductive reasoning are probabilistic and based on the evidence or observations collected.
  • Specific to general : Inductive reasoning moves from specific observations or data to general conclusions. It gathers evidence from individual cases and generalizes them to formulate a theory or general statement.
  • Ampliative reasoning : Inductive reasoning is considered an ampliative form of reasoning. It extends our knowledge beyond what is stated in the premises and can potentially lead to new discoveries or theories.

Different approaches to drawing conclusions

Deductive reasoning follows a deductive or syllogistic approach, where the conclusions are necessarily true if the premises are true. A deductive argument can be valid or invalid and sound or unsound. If an argument is valid and has true premises, it is considered sound, and its conclusion is guaranteed to be true.

Inductive reasoning, on the other hand, follows an inductive or probabilistic approach, where the conclusions are based on the evidence or observations collected. An inductive argument can be strong or weak and cogent or uncogent. A strong argument has premises that make the conclusion more likely to be true, while a weak argument does not provide strong evidence for the conclusion. A cogent argument is both strong and has true premises, while an uncogent argument is either weak or has false premises.
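The terminology in the two paragraphs above can be summarized in a small sketch: a deductive argument is sound only if it is both valid and has true premises, and an inductive argument is cogent only if it is both strong and has true premises. The helper functions are illustrative, not a standard API.

```python
# Classify arguments using the vocabulary defined in this section.

def deductive_status(valid: bool, premises_true: bool) -> str:
    """A deductive argument is sound only if it is valid AND has true premises."""
    return "sound" if (valid and premises_true) else "unsound"

def inductive_status(strong: bool, premises_true: bool) -> str:
    """An inductive argument is cogent only if it is strong AND has true premises."""
    return "cogent" if (strong and premises_true) else "uncogent"

print(deductive_status(valid=True, premises_true=True))    # sound
print(deductive_status(valid=True, premises_true=False))   # unsound
print(inductive_status(strong=True, premises_true=True))   # cogent
print(inductive_status(strong=False, premises_true=True))  # uncogent
```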

In deductive reasoning, the truth of the premises guarantees the truth of the conclusion, whereas in inductive reasoning, the truth of the premises only makes the conclusion probable. Deductive reasoning aims for certainty, while inductive reasoning aims for probability and generalizability.

Similarities between deductive and inductive reasoning

Both forms of reasoning aim to draw conclusions

Both deductive and inductive reasoning are used to draw conclusions based on available information. They both involve logical thinking and attempt to make sense of the world through the process of reasoning.

Shared goal of understanding the world through logical thinking

Both deductive and inductive reasoning have the objective of understanding the world and making sense of it through logical thinking. They seek to analyze and interpret evidence, observations, or premises in order to arrive at a logical conclusion.

Use of evidence and observations in both types of reasoning

Both deductive and inductive reasoning rely on evidence and observations to support their conclusions. In deductive reasoning, evidence is used to validate or invalidate the initial premise, while in inductive reasoning, observations and evidence are used to form a general pattern or theory.

Application in scientific research

Both deductive and inductive reasoning are utilized in scientific research to generate knowledge and test hypotheses. Deductive reasoning is often used to formulate specific predictions or hypotheses, while inductive reasoning is employed to analyze collected data and generalize the findings to draw broader conclusions.

Need for critical thinking

Both deductive and inductive reasoning require critical thinking skills. They demand careful evaluation of logical validity, consideration of alternative explanations, and the ability to identify logical fallacies or biases. Critical thinking is essential in both types of reasoning to ensure the conclusions drawn are robust and reliable.

Subject to limitations and potential errors

Both deductive and inductive reasoning are not foolproof and can be subject to limitations and potential errors. Deductive reasoning may be affected by the accuracy and validity of the initial premise, while inductive reasoning may be influenced by the representativeness and reliability of the observations or evidence used.

Continuous refinement and modification

Both deductive and inductive reasoning are dynamic and subject to continuous refinement and modification. As new evidence or observations emerge, both forms of reasoning can be revisited and adjusted accordingly. This iterative process helps to improve the accuracy and reliability of the conclusions.

Complementary nature

Deductive and inductive reasoning are not mutually exclusive; rather, they complement each other in many situations. Deductive reasoning provides a logical framework for determining the validity of an argument, while inductive reasoning helps to generate hypotheses and form generalizations based on observed patterns. The combination of both forms of reasoning allows for a more comprehensive and nuanced understanding of complex phenomena.

In conclusion, deductive reasoning and inductive reasoning are two different but complementary approaches to understanding the world through logical thinking.

Deductive reasoning is a top-down approach that starts with general principles and applies them to specific cases. It involves making logical deductions based on established premises and rules. Deductive reasoning is often used in mathematics, formal logic, and puzzles. It is characterized by its certainty and the ability to reach a definite conclusion if the premises are true. However, it is also limited by its premises: if they are false or incorrect, even a validly drawn conclusion may be false.

On the other hand, inductive reasoning is a bottom-up approach that involves making generalizations based on specific observations. It starts with specific examples or evidence and works towards broader conclusions. Inductive reasoning is used in scientific research, data analysis, and everyday decision-making. It is characterized by its openness to uncertainty and the fact that conclusions are never certain but only probable. While inductive reasoning allows for creativity and discovery, it is also prone to biases and errors due to the reliance on limited observations.

Despite their differences, deductive and inductive reasoning share common goals. Both approaches aim to draw conclusions and make sense of the world around us. They also rely on evidence and observations, although deductive reasoning relies more on established principles while inductive reasoning focuses more on empirical data. Both forms of reasoning are essential and have their own strengths and weaknesses.

It is crucial to remember that different situations call for different types of reasoning. Deductive reasoning is particularly useful in situations where certainty and validity are paramount, such as mathematical proofs or legal arguments. On the other hand, inductive reasoning is valuable when dealing with uncertain and complex situations, like scientific research or predicting future trends.

In summary, understanding both deductive and inductive reasoning is important for developing critical thinking skills and making informed decisions. By recognizing their similarities and differences, we can apply the appropriate type of reasoning to various situations, thereby enhancing our logical thinking abilities and gaining a deeper understanding of the world. Deductive and inductive reasoning, when used together effectively, can help us navigate the complexities of the world and make better judgments.


Middle Way Society

An ethical approach to a better life, by integrating desires and avoiding dogmatic extremes

Critical Thinking 2: Induction and Deduction

There was a great response to my first post in this series, so thanks to everyone who contributed to that. For my second one I’ve decided to tackle an issue that’s quite crucial to how Critical Thinking relates to the Middle Way.

There are two types of argument, normally known as induction and deduction. Deductive argument is what is classically formulated as argument and abstracted into logic. Deductive argument begins with assumed claims known as premises, and draws a conclusion from those premises. Within the terms of interpretation and the beliefs it assumes, the logical link between the premises and conclusion in deductive argument is absolute. If the premises are correct, then the conclusion must be correct if the reasoning is valid (i.e. follows the laws of logic).

For example:

All engineers are human beings.

John is an engineer.

So John must be a human being.

This valid argument must be correct if John is indeed an engineer, and if indeed all engineers are human beings. But I might have false information about John, and robotic engineers may be under development right now in Japanese laboratories. My assumptions may be false, but if they happen to be true then the conclusion must also be true.

That’s what we’re traditionally told about deductive argument. However, if you take into account embodied meaning, then we are all also going to have slightly different meanings for ‘John’, ‘engineer’, ‘human being’ etc. experienced in different bodies. It’s only if we further assume that these meanings are sufficiently shared and stable that a deductive argument can be ‘valid’ in this way.

The other type of argument, however, is far more common and far more useful. Inductive argument is imperfect argument that begins with limited information and draws further conclusions from it. Such reasoning can sometimes be grossly over-stated and prejudiced. For example:

My neighbour was burgled by a man from Norfolk.

Norfolk people are all criminals.

This is a crude example of prejudiced reasoning. It takes just one example and draws a conclusion about a whole class of people who are extremely varied. It's obviously not taking into account all the reasons why it might be wrong.

However, inductive reasoning is what we rely upon constantly. If we take into account its limitations it can be a justified way of reasoning. For example:

Tom has failed to fulfil his promises to his mother on ten successive occasions within a month.

His mother should not place any further reliance on Tom’s promises.

It’s still possible that Tom is trustworthy, and has just had ten unfortunate sets of events that stopped him fulfilling his promises, or that he was at fault in the past but has now completely reformed. However, it’s unlikely. We are usually obliged to trust the weight of our experience in cases like this. We could have confidence in the conclusion here as long as we also took into account the possibility that we could be wrong.

So, my argument here is that deductive argument, though perfect in theory, is not really different from inductive argument in the way we should treat it in practice. Deductive logic, though it may be perfect in the abstract, is in practice dependent both on the assumptions made and on the interpretation of the words of the argument as it is applied in our lives. In practice, then, deductive reasoning is just as fallible as inductive reasoning. If we use inductive reasoning with awareness of its limitations, though, it can provide us with justified beliefs. This kind of reasoning is justified because of its fallibility, not in spite of it.

How well justified are the following (inductive or deductive) arguments?

1. The no. 37 bus that I take to work has been more than ten minutes late on three successive days. This is an unacceptable standard of punctuality, and it has made me late for work. I shall write to the bus company to complain.

2. If the UK economy continues to grow at the rate it managed during the last quarter of 2013 (a rate unmatched since the crash of 2008) then we can conclude that economic recovery is well under way.

3. Romanians in the UK are arrested at seven times the rate of Britons (Daily Mail), so if more Romanians come to the UK we can expect an increase in crime.

4. If God exists and God is good, he would not allow people to suffer without making the truth available to them. So he must have sent divinely-inspired prophets to guide people.

5. I’ve been practising this difficult piano piece for two months now, but I seem to be making no progress. I keep making the same mistakes. I should give up this piece and learn something else.

Picture: Engineer at work by Angelsman (Wikimedia Commons)

5 thoughts on "Critical Thinking 2: Induction and deduction"

I have made an attempt to answer the questions. 1. An inductive argument, not well justified. 2. Also inductive reasoning, probably justified. 3. Deductive logic (an absolute claim), not well justified, no matter what theory the Daily Mail supports. 4. Also deductive and not well justified. 5. Inductive argument which is not well justified; hard pieces take longer to learn. I'll post this now, but will have another think about my answers.

I’m going to rate the justification of each argument from 0 (not justified at all) to 10 (as justifiable as is possible).

1. Rate = 4. This seems fairly justified, but three times in a row does not seem excessive and might not indicate a systemic problem with the bus company. It is quite probable that three separate and unavoidable random events took place, causing the bus to be late each time.

2. Rate = 6. I found this difficult. The argument only states that the economy continues to grow, but not for how long. It also says that the recovery is well under way, rather than just under way. Either way, since the economy has not grown at this rate since 2008, if it continues to do so then it is reasonable to claim that a recovery is well under way.

3. Rate = 4. This one is challenging. It does not take into account the possibility of discrimination by the police, or the fact that Romanians might not be as good at evading arrest. However, I can understand how – presented as it is – this conclusion can be reached.

4. Rate = 1. This one scores low because the conclusion seems quite wild – it assumes that revelation of the truth can only be achieved by God, via prophets.

5. Rate = 3. It would be reasonable to wish to give up, but making the same mistake for two months does not mean that you will continue to make this mistake, and the best way to enable oneself to overcome this difficulty is to practise.

🙁 I feel a bit upset that I have given my highest score to the daily mail example, but I suppose this is an emotional, rather than logical response.

Hi Robert, 1. Using Rich's 1-10 ratings I would say this is fairly justified: Rating 6 out of 10. There are arguably more mitigating circumstances for a bus to be late than, say, a train, such as heavy early morning traffic, passengers searching for change while paying, etc. One should also take into account to what degree the bus service guarantees punctuality. However, given the fact that the person uses the present simple tense to express habitual action (the bus I take to work), I feel one can make the reasonable assumption that the person normally relies on this service rather than other forms of transport to get them to work on time, and that reliability has to be backed up by a certain degree of confidence (presumably based on experience) in the punctuality of the service, otherwise they would choose another mode of transport. My degree of justification in this example has also been influenced by the fairly punctual timetable that buses operate to in my neck of the woods.

2. I found this a difficult one as I don't have a good grasp of economics, or of what degree of confidence one can have in an upward economic trend continuing, and for what period. You've mentioned that Nassim Nicholas Taleb in his book "Antifragile" suggests that economists and market analysts often put far too much faith in their economic predictions, which don't take sufficient account of the complexity of market forces and other unpredictable influencing factors. Also, why should it be a given that the UK will remain one of the top world economies, especially if you take into account the rise of economies such as India, Brazil, Russia, etc.? So I think I'll give this a rating of 4.

3. At first glance this argument appears strong in a deductive sense. However, one would need to question what these Romanians were arrested for, e.g. maybe for being illegal immigrants? – which would now no longer be the case, given the new EU law. Also, what size of sample did they use – 7 people out of 10, or 70,000 out of 100,000 – and where did they take that sample from, and what deciding factors did they use? Arguably, in any argument one also needs to take into account to some extent not just what is being judged, but also the judger. Does the Daily Mail place great importance in providing its readers with a balanced view when putting forward an argument? Would the Daily Mail possibly have a vested interest in wanting its readers to feel that 70% of Romanians living in this country are criminals? Is this me employing an Ad Hominem argument, though? Rating 2 out of 10

4. Another example of a seductive deductive argument: if the premises are true then the conclusion must be true. However, I think what’s important is how well the premises stand up. It’s a bit like saying: if all polar bears could play the piano and were classically trained, then at some stage in their training they would most likely learn “Für Elise” by Beethoven. I think I can say with a strong degree of justification that there is no way of knowing whether God exists or not (given that the concept is not subject to experience), so any hypothesizing about it is arguably irrelevant (unless one is playing safe, of course). Rating 0 out of 10.

5. I feel this is fairly well justified and give it a rating of 7 out of 10. Admittedly, the piano player could conceivably be giving up too early, as Norma and Rich point out. However, given the fact that they are attempting a difficult piece (‘difficult’ is also relative, of course), one could make the reasonable assumption that they are fairly proficient and would not have reached that degree of proficiency without plenty of practice and some experience of when to accept that some pieces at their stage of learning are too challenging (and therefore potentially demotivating) for them. I feel it is quite reasonable to assume that they have reached that point here and are making a pragmatic decision to move on.

‘I feel a bit upset that I have given my highest score to the daily mail example, but I suppose this is an emotional, rather than logical response’

Interestingly, this is a false statement. I believed it at the time of writing my last post (above), but having read my answer again I can see that I gave a rating of 6 to the economy argument and only 4 to the Daily Mail argument. It is amazing how quickly a false memory can occur. Giving a 4 to the Daily Mail argument was enough to disturb me into this error!

Whether the arguments are inductive or deductive is rather less open to argument here than how well justified they are! Part of the point of the exercise, though, is to think about the different ways that inductive and deductive arguments are justified. These are my answers:

1. This is an inductive argument, generalising from three occasions. If you compare it with generalisation from three examples elsewhere, it becomes clearer how weak this is. For example, suppose I went to a business conference, met three Russian businessmen, and concluded from this experience that all Russians were businessmen. I think this argument is weaker than Rich or Barry recognise.

2. This is a deductive argument. It is entirely hypothetical, stating that *if* one thing is the case then the other must be. The two things are “the economy continues to grow at the same rate it managed in the last quarter of 2013” and “economic recovery is well under way”. This seems to me obviously the case. No assumptions are being made about whether the recovery will continue – the hypothetical event is just being re-described. This can be seen as a valid deductive argument, regardless of whether you know anything about economics, and regardless of whether you agree with the hypothesis.

3. You could interpret this either inductively or deductively. Inductively, it would be arguing from a higher rate of arrests amongst Romanians already in the UK to an overall increase in crime if further Romanians arrive in future. If you take it this way it is weak, because the new Romanians might be quite different from the current Romanians. The rate of arrests amongst current Romanians compared to the rest of the population also tells you nothing about how many arrests have been made (there might be a very small number), so even if it was true (as no doubt many Daily Mail readers assume) that Romanians in general are 7 times more inclined to criminality than average, the impact on crime rates in general might still be negligible because the number of Romanians is very small when compared to the numbers in the general population.

If you take this as a deductive argument, it’s even worse. It certainly doesn’t follow from the arrest rates of current Romanians that there will necessarily be an increase in crime when future Romanians arrive.

4. This is a deductive argument that seems entirely valid. As Barry points out, however, it’s only of any relevance if you accept the assumptions it begins with.

5. This is inductive. I was interested in the variety of responses this got, as this is a real dilemma that I have encountered as an amateur pianist! How strong it is depends on other assumptions, I think, such as how much you really want to learn the piece, particularly when compared to others that you could more realistically learn. In general (on the basis of individual experience) I’m inclined to agree with Barry that it’s quite a strong argument (though not overwhelmingly so). If I’ve made little progress with a piece in two months then I may be ‘flogging a dead horse’ to continue.



Inductive vs Deductive Reasoning | Difference & Examples

Published on 4 May 2022 by Raimo Streefkerk. Revised on 10 October 2022.

The main difference between inductive and deductive reasoning is that inductive reasoning aims at developing a theory while deductive reasoning aims at testing an existing theory.

Inductive reasoning moves from specific observations to broad generalisations, and deductive reasoning the other way around.

Both approaches are used in various types of research , and it’s not uncommon to combine them in one large study.


Table of contents

  • Inductive research approach
  • Deductive research approach
  • Combining inductive and deductive research
  • Frequently asked questions about inductive vs deductive reasoning

When there is little to no existing literature on a topic, it is common to perform inductive research because there is no theory to test. The inductive approach consists of three stages:

  • Stage 1, observation: a low-cost airline flight is delayed; dogs A and B have fleas; elephants depend on water to exist
  • Stage 2, pattern: another 20 flights from low-cost airlines are delayed; all observed dogs have fleas; all observed animals depend on water to exist
  • Stage 3, theory: low-cost airlines always have delays; all dogs have fleas; all biological life depends on water to exist

Limitations of an inductive approach

A conclusion drawn on the basis of an inductive method can never be proven, but it can be invalidated.

Example You observe 1,000 flights from low-cost airlines. All of them experience a delay, which is in line with your theory. However, you can never prove that flight 1,001 will also be delayed. Still, the larger your dataset, the more reliable the conclusion.
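This asymmetry (any number of confirmations supports the theory, while a single counterexample invalidates it) can be sketched in a few lines of Python; the flight data and function name here are invented for illustration:

```python
# Illustrative sketch (invented data): an inductive generalisation such as
# "low-cost airlines always have delays" survives any number of confirming
# observations, but a single counterexample invalidates it.

def generalisation_holds(observations):
    """Return True while every observed flight so far was delayed."""
    return all(delayed for delayed in observations)

confirming = [True] * 1000                # 1,000 delayed flights observed
print(generalisation_holds(confirming))   # True: theory stands, yet unproven

# Flight 1,001 arrives on time: the universal claim is now invalidated.
print(generalisation_holds(confirming + [False]))   # False
```

Note that no value of the counter of confirming flights ever flips the result to "proven"; only a counterexample changes anything, which is exactly the limitation described above.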


When conducting deductive research, you always start with a theory (the result of inductive research). Reasoning deductively means testing these theories. If there is no theory yet, you cannot conduct deductive research.

The deductive research approach consists of four stages:

  • Stage 1, hypothesis: if passengers fly with a low-cost airline, then they will always experience delays; all pet dogs in my apartment building have fleas; all land mammals depend on water to exist
  • Stage 2, data collection: collect flight data of low-cost airlines; test all dogs in the building for fleas; study all land mammal species to see if they depend on water
  • Stage 3, analysis: 5 out of 100 flights of low-cost airlines are not delayed; 10 out of 20 dogs didn’t have fleas; all land mammal species depend on water
  • Stage 4, hypothesis test: 5 out of 100 flights not delayed = reject hypothesis; 10 out of 20 dogs without fleas = reject hypothesis; all land mammal species depend on water = support hypothesis

Limitations of a deductive approach

The conclusions of deductive reasoning can only be true if all the premises set in the inductive study are true and the terms are clear.

  • All dogs have fleas (premise)
  • Benno is a dog (premise)
  • Benno has fleas (conclusion)
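The syllogism above can be encoded mechanically: once the universal premise and the particular premise are granted, the conclusion is forced. This is a hypothetical sketch (the helper name and data are ours, not Scribbr’s):

```python
# Minimal encoding of the syllogism: "All dogs have fleas; Benno is a dog;
# therefore Benno has fleas." Granting both premises forces the conclusion.

def conclusion(animal, dogs):
    """Apply the universal premise 'all dogs have fleas' to one animal."""
    if animal in dogs:       # particular premise: the animal is a dog
        return True          # the conclusion follows deductively
    return None              # the premises say nothing about non-dogs

dogs = {"Benno", "Rex"}
print(conclusion("Benno", dogs))   # True: Benno has fleas
```

If the universal premise is false (some dogs lack fleas), the derivation is still valid but the conclusion may be untrue, which is exactly the limitation the section describes.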

Many scientists conducting a larger research project begin with an inductive study (developing a theory). The inductive study is followed up with deductive research to confirm or invalidate the conclusion.

In the examples above, the conclusion (theory) of the inductive study is also used as a starting point for the deductive study.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning, where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.



Humanities LibreTexts

1.8: Deductive vs. Inductive Arguments


  • Matthew Van Cleave
  • Lansing Community College

The concepts of validity and soundness that we have introduced apply only to the class of what are called “deductive arguments”. A deductive argument is an argument whose conclusion is supposed to follow from its premises with absolute certainty, thus leaving no possibility that the conclusion doesn’t follow from the premises. For a deductive argument to fail to do this is for it to fail as a deductive argument. In contrast, an inductive argument is an argument whose conclusion is supposed to follow from its premises with a high level of probability, which means that although it is possible that the conclusion doesn’t follow from its premises, it is unlikely that this is the case. Here is an example of an inductive argument:

Tweets is a healthy, normally functioning bird and since most healthy, normally functioning birds fly, Tweets probably flies.

Notice that the conclusion, Tweets probably flies, contains the word “probably.” This is a clear indicator that the argument is supposed to be inductive, not deductive. Here is the argument in standard form:

  • Tweets is a healthy, normally functioning bird
  • Most healthy, normally functioning birds fly
  • Therefore, Tweets probably flies

Given the information provided by the premises, the conclusion does seem to be well supported. That is, the premises do give us a strong reason for accepting the conclusion. This is true even though we can imagine a scenario in which the premises are true and yet the conclusion is false. For example, suppose that we added the following premise:

Tweets is 6 ft tall and can run 30 mph.

Were we to add that premise, the conclusion would no longer be supported by the premises, since any bird that is 6 ft tall and can run 30 mph, is not a kind of bird that can fly. That information leads us to believe that Tweets is an ostrich or emu, which are not kinds of birds that can fly. As this example shows, inductive arguments are defeasible arguments since by adding further information or premises to the argument, we can overturn (defeat) the verdict that the conclusion is well-supported by the premises. Inductive arguments whose premises give us a strong, even if defeasible, reason for accepting the conclusion are called, unsurprisingly, strong inductive arguments. In contrast, inductive arguments that do not provide a strong reason for accepting the conclusion are called weak inductive arguments.

Whereas strong inductive arguments are defeasible, valid deductive arguments aren’t. Suppose that instead of saying that most birds fly, premise 2 said that all birds fly.

  • Tweets is a healthy, normally functioning bird.
  • All healthy, normally functioning birds can fly.
  • Therefore, Tweets can fly.

This is a valid argument and since it is a valid argument, there are no further premises that we could add that could overturn the argument’s validity. (True, premise 2 is false, but as we’ve seen that is irrelevant to determining whether an argument is valid.) Even if we were to add the premise that Tweets is 6 ft tall and can run 30 mph, it doesn’t overturn the validity of the argument. As soon as we use the universal generalization, “all healthy, normally functioning birds can fly,” then when we assume that premise is true and add that Tweets is a healthy, normally functioning bird, it has to follow from those premises that Tweets can fly. This is true even if we add that Tweets is 6 ft tall because then what we have to imagine (in applying our informal test of validity) is a world in which all birds, including those that are 6 ft tall and can run 30 mph, can fly. Although inductive arguments are an important class of arguments that are commonly used every day in many contexts, logic texts tend not to spend as much time with them since we have no agreed upon standard of evaluating them. In contrast, there is an agreed upon standard of evaluation of deductive arguments. We have already seen what that is; it is the concept of validity. In chapter 2 we will learn some precise, formal methods of evaluating deductive arguments. There are no such agreed upon formal methods of evaluation for inductive arguments. This is an area of ongoing research in philosophy. In chapter 3 we will revisit inductive arguments and consider some ways to evaluate inductive arguments.
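The contrast between defeasible inductive strength and indefeasible deductive validity can be sketched in Python; the predicate names and fact labels are invented for illustration, not part of the text:

```python
# Illustrative sketch (invented fact labels): adding information defeats
# the inductive inference about Tweets, but not the deductive one.

def inductive_probably_flies(facts):
    """'MOST healthy birds fly' is defeated by ostrich/emu indicators."""
    if "6ft_tall" in facts or "runs_30mph" in facts:
        return False                    # probably an ostrich or emu
    return "healthy_bird" in facts      # defeasible default conclusion

def deductive_flies(facts):
    """Under the universal premise 'ALL healthy birds can fly'."""
    return "healthy_bird" in facts      # no added fact can overturn validity

print(inductive_probably_flies({"healthy_bird"}))               # True
print(inductive_probably_flies({"healthy_bird", "6ft_tall"}))   # False: defeated
print(deductive_flies({"healthy_bird", "6ft_tall"}))            # True: still valid
```

The deductive function ignores every fact except the premise it depends on, mirroring the point that no additional premise can overturn a valid argument, whereas the inductive function must consult the extra evidence.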

The Berkeley Well-Being Institute


What Is Inductive Reasoning? (A Definition)

Why Is Inductive Reasoning Important?

  • Identifying patterns and making predictions: By observing patterns and trends in specific instances, we can form general conclusions and predictions about the world (Hayes et al., 2010). This allows us to make informed decisions in everyday situations, like expecting rain after seeing dark clouds.
  • Generalization: Inductive reasoning allows us to make generalizations based on specific observations or examples. By identifying patterns or trends in specific instances, we can infer broader principles or rules that apply to a larger set of situations (Heit, 2000).
  • Scientific method: Inductive reasoning plays a crucial role in scientific discovery. Scientists use observations and data to form hypotheses and theories, which they then test through experimentation. This cycle of observation, induction, and testing is essential for advancing scientific knowledge.
  • Critical thinking: Inductive reasoning requires you to analyze evidence, identify weaknesses, and consider alternative explanations. This helps you develop critical thinking skills, which are essential for making sound judgments (Shin, 2019).
  • Problem-solving: In many problem-solving scenarios, especially those with incomplete information, inductive reasoning allows us to draw reasonable conclusions based on the available evidence. It helps us make informed decisions or solve complex problems (Heit, 2000; Shin, 2019).
  • Everyday decisions: We constantly make decisions based on incomplete information and past experiences. Inductive reasoning allows us to use what we know to make informed predictions about the future, even when we can't be absolutely certain.
  • Creativity: By identifying patterns and making connections between seemingly unrelated things, inductive reasoning can lead to new ideas and innovations.
  • Adaptability and learning: The world around us is constantly changing, and inductive reasoning allows us to adapt our understanding based on new information. This is essential for learning and growth (Heit, 2000).


History of Inductive Reasoning

Examples of Inductive Reasoning

  • Every time I eat strawberries, I get hives. So I must be allergic to strawberries. (This is an observation of a pattern leading to a possible explanation.)
  • I stopped drinking coffee and now I have headaches. I’m having caffeine withdrawal. (This is another observation of a pattern.)
  • I see many people wearing jackets today. It must be cold outside. (This relies on the assumption that most people dress according to the weather.)
  • The traffic is always heavy on Friday afternoons. Today is Friday afternoon, so the traffic will probably be heavy. (Past experience informs a prediction about the future.)
  • Researchers observe that plants grow taller when exposed to sunlight. They hypothesize that sunlight has a positive effect on plant growth. (This is the first step in the scientific process, where observations lead to hypotheses.)
  • A medical study finds that a new drug is effective in treating a disease in a group of patients. Researchers inductively conclude that the drug may be effective for a larger population. (Specific results lead to a broader generalization.)
  • Astronomers observe patterns in the movement of stars and galaxies, leading them to theorize about the existence of dark matter. (This is based on indirect evidence, as dark matter cannot be directly observed.)


Types of Inductive Reasoning

  • Generalization: This is the most common method, where you observe a pattern in a sample and extend it to the entire population. The classic “all swans are white” inference fits into this category.
  • Statistical generalization: Similar to generalization, this uses statistical data to support the conclusion. For instance, finding 95% of people surveyed prefer chocolate ice cream might lead you to say, "Most people prefer chocolate ice cream." This is more reliable than simple generalization but still has limitations due to sample bias and margin of error.
  • Causal reasoning: This method seeks to identify cause-and-effect relationships. Observing that plants grow faster with fertilizer might lead you to the conclusion, "Fertilizer causes plants to grow faster." However, correlation doesn't always equal causation, and other factors could be influencing the growth.
  • Sign reasoning: This method relies on identifying signs or indicators. Seeing dark clouds might lead you to think, "It will rain soon." While helpful, it's not foolproof. Other factors could influence the weather, and the sign might not always be accurate.
  • Analogical reasoning: This method draws comparisons between similar situations. Comparing the human brain to a computer might lead you to conclude that the brain processes information like a computer. This can be insightful, but it requires careful consideration of the differences between the two entities.


Psychology of Inductive Reasoning


Inductive Reasoning in Math

Inductive Reasoning in Qualitative Research

Inductive Reasoning in Science

  • Formulate hypotheses: Based on repeated observations of a phenomenon, scientists propose a tentative explanation.
  • Develop theories: As many experiments and observations support a hypothesis, it gains strength and can evolve into a theory, a well-tested and widely accepted explanation for a phenomenon.
  • Make predictions: Theories allow scientists to predict how things will behave under different conditions. These predictions are then tested through further experiments.
  • Developing the theory of evolution: Observing diverse life forms and their adaptations led Darwin to propose natural selection as a mechanism for change.
  • Predicting the properties of new elements: Based on the periodic table, chemists can predict the behavior of undiscovered elements based on their position.
  • Testing the effectiveness of a new drug: By analyzing results from clinical trials, researchers can infer the drug's potential benefits and risks.

Methods of Inductive Reasoning

Inductive Reasoning Examples in Literature

  • Sherlock Holmes stories by Arthur Conan Doyle: In many Sherlock Holmes stories, the famous detective uses inductive reasoning to solve cases. He gathers specific details and observations from crime scenes and then draws general conclusions to deduce the identity of the culprit.
  • Scout Finch in "To Kill a Mockingbird" by Harper Lee: Scout, the young protagonist of "To Kill a Mockingbird," uses her observations of the adult world to form her own understanding of justice and prejudice. She notices the unfair treatment of Tom Robinson, a black man falsely accused of a crime, and gradually develops her own sense of right and wrong.
  • Harry Potter in the "Harry Potter" series by J.K. Rowling: Harry Potter frequently uses inductive reasoning to solve mysteries and navigate the dangers of the wizarding world. He observes suspicious behavior, connects seemingly unrelated clues, and draws conclusions about the motives and actions of others. In "Harry Potter and the Philosopher's Stone," he infers that Professor Snape is attempting to steal a hidden object based on past encounters and Snape's unusual behavior.

Inductive Reasoning Fallacy

  • Hasty generalization: This fallacy occurs when a general conclusion is drawn from a small or unrepresentative sample of data. For example, someone might say, "I met two rude people from New York City, so all New Yorkers must be rude." This is a hasty generalization because it ignores the fact that there are millions of people in New York City, and it is impossible to make an accurate judgment about all of them based on the experience of meeting just two.
  • False analogy: This fallacy occurs when a comparison is made between two things that are not truly similar, and a conclusion is drawn based on that false similarity. For example, someone might say, "The brain is like a computer, so thinking must be like a computer program." This is a false analogy because brains and computers are fundamentally different systems, and the way they process information is not directly comparable.
  • False cause: This fallacy occurs when a conclusion is drawn based on the assumption that because one event happened after another, the first event caused the second event. For example, someone might say, "I took a vitamin C supplement and I didn't get a cold, so vitamin C prevents colds." This fallacy ignores the possibility that other factors may have been at play.
  • Slippery slope: This fallacy occurs when a series of small steps are presented as leading to a disastrous outcome, often without sufficient evidence to support the claim. For example, someone might say, "If we allow same-sex marriage, then next they'll want to legalize polygamy, and then bestiality!"



Books Related to Inductive Reasoning

  • Decisions We Make: How To Figure Things Out: Inductive Reasoning versus Deductive Reasoning (Advice & How To Book 1)
  • Parenting Teenage Girls: How to Use Inductive Reasoning
  • The Second Best Book of Sudoku Strategy: Using inductive reasoning to solve difficult puzzles mentally

Final Thoughts on Inductive Reasoning


  • Babcock, L., & Vallesi, A. (2015). The interaction of process and domain in prefrontal cortex during inductive reasoning. Neuropsychologia, 67, 91–99.
  • Hayes, B. K., Heit, E., & Swendsen, H. (2010). Inductive reasoning. Wiley Interdisciplinary Reviews: Cognitive Science, 1(2), 278–292.
  • Heit, E. (2000). Properties of inductive reasoning. Psychonomic Bulletin & Review, 7, 569–592.
  • Mattson, M. P. (2014). Superior pattern processing is the essence of the evolved human brain. Frontiers in Neuroscience, 8(8), 265.
  • Miller, C. (2020, August 1). Inductive reasoning. Exploring communication in the real world. Pressbooks. https://cod.pressbooks.pub/communication/chapter/20-2-inductive-reasoning/
  • Noaparast, K. B., Niknam, Z., & Noaparast, M. Z. B. (2011). The sophisticated inductive approach and science education. Procedia - Social and Behavioral Sciences, 30, 1365–1369.
  • Shin, H. S. (2019). Reasoning processes in clinical reasoning: from the perspective of cognitive psychology. Korean Journal of Medical Education, 31(4), 299.
  • Tenny, S. (2022, September 18). Qualitative study. StatPearls [Internet]. https://www.ncbi.nlm.nih.gov/books/NBK470395/
  • The Decision Lab. (n.d.). Inductive reasoning. https://thedecisionlab.com/reference-guide/philosophy/inductive-reasoning
  • University of Minnesota. (2016, September 29). Persuasive reasoning and fallacies. Communication in the Real World. https://open.lib.umn.edu/communication/chapter/11-3-persuasive-reasoning-and-fallacies/

Jessie Ball duPont Library

Critical Thinking Skills: What Is Inductive Reasoning?


Inductive reasoning: conclusion merely likely

Inductive reasoning begins with observations that are specific and limited in scope, and proceeds to a generalized conclusion that is likely, but not certain, in light of accumulated evidence. You could say that inductive reasoning moves from the specific to the general. Much scientific research is carried out by the inductive method: gathering evidence, seeking patterns, and forming a hypothesis or theory to explain what is seen.

Conclusions reached by the inductive method are not logical necessities; no amount of inductive evidence guarantees the conclusion. This is because there is no way to know that all the possible evidence has been gathered, and that there exists no further bit of unobserved evidence that might invalidate the hypothesis. Thus, while the newspapers might report the conclusions of scientific research as absolutes, scientific literature itself uses more cautious language, the language of inductively reached, probable conclusions:

What we have seen is the ability of these cells to feed the blood vessels of tumors and to heal the blood vessels surrounding wounds. The findings suggest that these adult stem cells may be an ideal source of cells for clinical therapy. For example, we can envision the use of these stem cells for therapies against cancer tumors [...].

Because inductive conclusions are not logical necessities, inductive arguments are not simply true. Rather, they are cogent: that is, the evidence seems complete, relevant, and generally convincing, and the conclusion is therefore probably true. Nor are inductive arguments simply false; rather, they are not cogent.

An important difference from deductive reasoning is that, while inductive reasoning cannot yield an absolutely certain conclusion, it can actually increase human knowledge (it is ampliative). It can make predictions about future events or as-yet unobserved phenomena.

For example, Albert Einstein observed the movement of a pocket compass when he was five years old and became fascinated with the idea that something invisible in the space around the compass needle was causing it to move. This observation, combined with additional observations (of moving trains, for example) and the results of logical and mathematical tools (deduction), resulted in a rule that fit his observations and could predict events that were as yet unobserved.

  • Last Updated: Jul 28, 2020 5:53 PM
  • URL: https://library.sewanee.edu/critical_thinking


Deductive, Inductive and Abductive Reasoning


Reasoning is the process of using existing knowledge to draw conclusions, make predictions, or construct explanations. Three methods of reasoning are the deductive, inductive, and abductive approaches.

Deductive reasoning: conclusion guaranteed

Deductive reasoning starts with the assertion of a general rule and proceeds from there to a guaranteed specific conclusion. Deductive reasoning moves from the general rule to the specific application: in deductive reasoning, if the original assertions are true, then the conclusion must also be true. For example, math is deductive:

If x = 4
And if y = 1
Then 2x + y = 9

In this example, it is a logical necessity that 2x + y equals 9; 2x + y must equal 9. As a matter of fact, formal, symbolic logic uses a language that looks rather like the math equality above, complete with its own operators and syntax. But a deductive syllogism (think of it as a plain-English version of a math equality) can be expressed in ordinary language:

If entropy (disorder) in a system will increase unless energy is expended,
And if my living room is a system,
Then disorder will increase in my living room unless I clean it.

In the syllogism above, the first two statements, the propositions or premises, lead logically to the third statement, the conclusion. Here is another example:

A medical technology ought to be funded if it has been used successfully to treat patients.
Adult stem cells are being used to treat patients successfully in more than sixty-five new therapies.
Adult stem cell research and technology should be funded.

A conclusion is sound (true) or unsound (false), depending on the truth of the original premises (for any premise may be true or false). At the same time, independent of the truth or falsity of the premises, the deductive inference itself (the process of "connecting the dots" from premise to conclusion) is either valid or invalid. The inferential process can be valid even if the premise is false:

There is no such thing as drought in the West.
California is in the West.
California need never make plans to deal with a drought.

In the example above, though the inferential process itself is valid, the conclusion is false because the premise, There is no such thing as drought in the West, is false. A syllogism yields a false conclusion if either of its propositions is false. A syllogism like this is particularly insidious because it looks so very logical; it is, in fact, logical. But whether through error or malice, if either of the propositions above is wrong, then a policy decision based upon it (California need never make plans to deal with a drought) probably would fail to serve the public interest.
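The validity/soundness distinction can be checked mechanically: an inference form is valid when no assignment of truth values makes all premises true and the conclusion false. A minimal sketch, with the function name and exhaustive check as illustrative assumptions rather than anything from the text:

```python
# Modus ponens ("If P then Q; P; therefore Q") is VALID regardless of
# whether its premises happen to be true. Soundness additionally
# requires true premises.

def modus_ponens_is_valid() -> bool:
    """Valid iff no truth assignment makes both premises true
    while the conclusion is false."""
    for p in (True, False):
        for q in (True, False):
            premise1 = (not p) or q   # "if P then Q"
            premise2 = p              # "P"
            conclusion = q            # "therefore Q"
            if premise1 and premise2 and not conclusion:
                return False
    return True

# The drought syllogism has this valid form, but its first premise is
# false, so the argument is valid yet unsound.
print(modus_ponens_is_valid())  # True: the form itself is valid
```

The check confirms only the form; it says nothing about whether the premises match reality, which is exactly the gap the drought example exploits.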

Assuming the propositions are sound, the rather stern logic of deductive reasoning can give you absolutely certain conclusions. However, deductive reasoning cannot really increase human knowledge (it is nonampliative) because the conclusions yielded by deductive reasoning are tautologies: statements that are contained within the premises and virtually self-evident. Therefore, while with deductive reasoning we can make observations and expand implications, we cannot make predictions about future or otherwise non-observed phenomena.

Inductive reasoning: conclusion merely likely

Inductive reasoning begins with observations that are specific and limited in scope, and proceeds to a generalized conclusion that is likely, but not certain, in light of accumulated evidence. You could say that inductive reasoning moves from the specific to the general. Much scientific research is carried out by the inductive method: gathering evidence, seeking patterns, and forming a hypothesis or theory to explain what is seen.

Conclusions reached by the inductive method are not logical necessities; no amount of inductive evidence guarantees the conclusion. This is because there is no way to know that all the possible evidence has been gathered, and that there exists no further bit of unobserved evidence that might invalidate the hypothesis. Thus, while the newspapers might report the conclusions of scientific research as absolutes, scientific literature itself uses more cautious language, the language of inductively reached, probable conclusions:
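One way to make this concrete is a Bayesian sketch: each confirming observation raises the probability of a generalization, yet no finite amount of evidence pushes it to certainty. The Beta-Bernoulli model and the numbers below are illustrative assumptions, not part of the original text:

```python
# Bayesian updating: every confirming observation raises the posterior
# probability of the hypothesis, but it never reaches 1.
def posterior_mean(successes: int, failures: int,
                   prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Mean of a Beta(prior_a + successes, prior_b + failures) posterior."""
    return (prior_a + successes) / (prior_a + successes + prior_b + failures)

for n in (10, 100, 1000):
    p = posterior_mean(successes=n, failures=0)
    print(n, round(p, 4))  # approaches 1 but never equals it
```

Even after 1000 unbroken confirmations the posterior stays strictly below 1, mirroring the point that inductive conclusions are probable, never necessary.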

What we have seen is the ability of these cells to feed the blood vessels of tumors and to heal the blood vessels surrounding wounds. The findings suggest that these adult stem cells may be an ideal source of cells for clinical therapy. For example, we can envision the use of these stem cells for therapies against cancer tumors [...].1

Because inductive conclusions are not logical necessities, inductive arguments are not simply true. Rather, they are cogent: that is, the evidence seems complete, relevant, and generally convincing, and the conclusion is therefore probably true. Nor are inductive arguments simply false; rather, they are not cogent.

An important difference from deductive reasoning is that, while inductive reasoning cannot yield an absolutely certain conclusion, it can actually increase human knowledge (it is ampliative). It can make predictions about future events or as-yet unobserved phenomena.

For example, Albert Einstein observed the movement of a pocket compass when he was five years old and became fascinated with the idea that something invisible in the space around the compass needle was causing it to move. This observation, combined with additional observations (of moving trains, for example) and the results of logical and mathematical tools (deduction), resulted in a rule that fit his observations and could predict events that were as yet unobserved.

Abductive reasoning: taking your best shot

Abductive reasoning typically begins with an incomplete set of observations and proceeds to the likeliest possible explanation for the set. Abductive reasoning yields the kind of daily decision-making that does its best with the information at hand, which often is incomplete.

A medical diagnosis is an application of abductive reasoning: given this set of symptoms, what is the diagnosis that would best explain most of them? Likewise, when jurors hear evidence in a criminal case, they must consider whether the prosecution or the defense has the best explanation to cover all the points of evidence. While there may be no certainty about their verdict, since there may exist additional evidence that was not admitted in the case, they make their best guess based on what they know.

While cogent inductive reasoning requires that the evidence that might shed light on the subject be fairly complete, whether positive or negative, abductive reasoning is characterized by lack of completeness, either in the evidence, or in the explanation, or both. A patient may be unconscious or fail to report every symptom, for example, resulting in incomplete evidence, or a doctor may arrive at a diagnosis that fails to explain several of the symptoms. Still, he must reach the best diagnosis he can.

The abductive process can be creative, intuitive, even revolutionary.2 Einstein's work, for example, was not just inductive and deductive, but involved a creative leap of imagination and visualization that scarcely seemed warranted by the mere observation of moving trains and falling elevators. In fact, so much of Einstein's work was done as a "thought experiment" (for he never experimentally dropped elevators) that some of his peers discredited it as too fanciful. Nevertheless, he appears to have been right: to this day his remarkable conclusions about space-time continue to be verified experimentally.

References

1. Verfaillie, Catherine. "Adult Bone Marrow Stem Cells Can Become Blood Vessels." News release, University of Minnesota, Jan. 30, 2002. Accessed June 1, 2005. <http://www.sciencedaily.com/releases/2002/01/020131074645.htm>

2. Thagard, Paul, and Cameron Shelley. "Abductive Reasoning: Logic, Visual Thinking, and Coherence." Waterloo, Ontario: Philosophy Department, University of Waterloo, 1997. Accessed June 2, 2005. <http://cogsci.uwaterloo.ca/Articles/Pages/%7FAbductive.html>


Butte College | 3536 Butte Campus Drive, Oroville CA 95965 | General Information (530) 895-2511

Inductive and deductive justification of knowledge: epistemological beliefs and critical thinking at the beginning of studying mathematics

  • Open access
  • Published: 14 November 2020
  • Volume 106, pages 117–132 (2021)


  • Benjamin Rott   ORCID: orcid.org/0000-0002-8113-1584 1  


Epistemological beliefs are considered to play an important role in processes of learning and teaching. However, research on epistemological beliefs is confronted with methodological issues: for traditionally used self-report instruments with closed items, problems with social desirability, validity, and capturing domain-specific aspects of beliefs have been reported. Therefore, a new instrument with open items has been developed to capture mathematics-related epistemological beliefs, focusing on the justification of knowledge. Extending a previous study, the study at hand uses a larger and more diverse sample as well as a refined methodology. In total, 581 mathematics students (Bachelor of Science as well as pre-service teachers) completed the belief questionnaire and a test for mathematical critical thinking. The results confirm that beliefs can empirically be distinguished into belief position and belief argumentation, with only the latter being correlated to critical thinking.


1 Introduction

Studying beliefs has a long tradition in educational research generally (Hofer & Pintrich, 1997 ) and mathematics education specifically (Thompson, 1992 ; Philipp, 2007 ). A special emphasis has been put on epistemological (or epistemic ) beliefs (EB), which are beliefs about the nature of knowledge and knowing (Hofer & Pintrich, 1997 ). Research indicates that reflected or sophisticated EB are correlated with greater learning success (e.g., Hofer & Pintrich, 1997 ; Trautwein & Lüdtke, 2007 ). Students with sophisticated EB, that is, students who understand the way that scientific knowledge is created and founded, show better integrated and deeper knowledge than students with less sophisticated EB; in contrast, students seem to learn more superficially if they are convinced that knowledge is stable and secure and only needs to be passed on by authorities (Tsai, 1998a , 1998b ). Additionally, sophisticated EB are considered not only as a prerequisite to successfully completing higher education but also for active participation in modern science- and technology-based societies (e.g., Bromme, 2005 ).

Much of the work on students’ EB has been done under the assumption that such beliefs are domain-general (Muis, 2004 , p. 346). There are, however, newer studies suggesting that EB can be domain-specific (Muis, 2004 ; Stahl & Bromme, 2007 ). Thus, domain-specific EB have been identified as an area in need of further study (Hofer & Pintrich, 1997 , pp. 124 ff.).

The study described here deals with mathematics-related EB (MEB); it focuses on what students believe and answer to typical questions regarding epistemology discussed in the philosophy of mathematics (e.g., Barrow, 1992 ; Horsten, 2018 ): What does it mean to do mathematics? How do mathematicians work when they generate or find new mathematical knowledge? Is such knowledge created or discovered? And, most important for this study, do mathematicians justify their ideas and hypotheses either deductively or inductively or do they use both ways (cf. Lakatos, 1976 ; Russell, 1919 )?

2 Background

In this section, conceptualizations of EB as well as ways of measuring EB are discussed. Thereafter, conceptualizations and ways of measuring critical thinking are introduced. Finally, research goals for the study at hand are presented.

2.1 Epistemological beliefs

In a literature review, Hofer and Pintrich ( 1997 ) identified two general areas of EB, nature of knowledge and nature or process of knowing , with two dimensions each. One of the dimensions of the latter area, which is important for the study at hand, is justification of knowledge , addressing assumptions about how to evaluate knowledge claims, how to use empirical evidence, how to refer to experts, and how to justify knowledge claims.

Beliefs generally and EB specifically can be conceptualized on a scale from subconscious beliefs over conscious beliefs to knowledge of concepts, rules, and so forth, or, in other words, from subjective (not necessarily justified) to objective knowledge (cf. Murphy & Mason, 2006 ; Pehkonen, 1999 ). Under this perspective, as knowledge is built, beliefs converge towards objective knowledge (i.e., “true belief” in a Platonic sense), which in its purest form is not accessible to humans (see Philipp, 2007 , pp. 266 ff. for a further discussion). Therefore, all human knowledge is some kind of belief. In this line of thought, Stahl and Bromme ( 2007 ) proposed a distinction between (associative-)connotative and (explicit-)denotative aspects of EB. The former are based on feelings and often context-free, whereas the latter are reflected upon and context-specific. Especially denotative beliefs are dependent on a person’s knowledge and experience. For example, mathematics is often identified as a deductive science that is characterized by well-formulated theorems and formal proofs. Without ever having worked like a research mathematician or knowing that famous mathematicians like Leonhard Euler collected countless examples and conducted “quasi experiments” before being able to formulate and deductively prove a theorem (cf. Pólya, 1954 ), one cannot comprehend inductive aspects of mathematics. The study at hand focuses on denotative beliefs.

2.2 Measuring epistemological beliefs

At the beginning of research on EB, mainly interviews were conducted; later, the use of questionnaires became the norm, as they could be used for much larger samples (cf. Hofer & Pintrich, 1997). Questionnaires with closed items have proven to be most popular, especially the instrument developed by Schommer (1990). In such instruments, the respondents are presented with statements that they should agree or disagree with on a Likert scale. Sample items from Schommer's Epistemological Belief Questionnaire are "You can believe almost everything you read"; "Truth is unchanging"; and "Scientists can ultimately get to the truth."

In such questionnaire studies, how “good” (i.e., reflective or sophisticated) test persons’ beliefs are is determined by their answers, that is, the positions they agree to, like “knowledge is certain or uncertain.” In a somewhat simplified way, “naïve” beliefs are attributed to respondents who consider knowledge as safe and stable, whereas more sophisticated beliefs are attributed to respondents who agree on the uncertainty and tentativeness of knowledge (Hofer & Pintrich, 1997 ). With regard to the history of science, this approach is reasonable—thinking of Newton’s theory of gravity, which has been superseded by Einstein’s theory of relativity.

There are, however, problems with the use of closed self-report questionnaires such as the problem of social desirability; that is, test persons often mark answers that they think are expected of them (Di Martino & Sabena, 2010 ; Safrudiannur, 2020 ). Other problems of such instruments relate to their reliability and validity (Lederman, Abd-El-Khalick, Bell, & Schwartz, 2002 ; Stahl, 2011 ) as well as to difficulties in their capability of measuring general or domain-specific EB (Muis, 2004 ; Stahl & Bromme, 2007 ).

Tying sophistication to the belief position may cause additional validity problems: For example, Clough (2007, p. 3) points out that belief positions "may be easily misinterpreted and abused" by being memorized as declarative knowledge. And Schommer herself argues for a "need for balance"; that is, extreme beliefs in positions like "knowledge is certain or tentative" are unfavorable as they might prevent persons from acting appropriately (Schommer-Aikins, 2004, p. 21). Especially for mathematics, in addition to arguments for uncertainty, there are also good arguments for knowledge not necessarily being regarded as uncertain and tentative (e.g., axioms and deductive reasoning) by persons with a sophisticated background. An interview study by Rott, Leuders, and Stahl (2014) has shown that research mathematicians partly argue for positions like certainty and stability of mathematical knowledge, and they do so in a very convincing way; that is, their argumentation shows reflected beliefs even though they argue for a position associated with "naïve" beliefs. Regarding the justification of knowledge, mathematics, unlike other natural and educational sciences, offers unique ways of working inductively as well as deductively (Horsten, 2018; Pólya, 1954). Again, either position (inductive vs. deductive) and its importance can be supported with convincing arguments (Rott & Leuders, 2016a). Therefore, at least for the domain of mathematics, it seems plausible not to tie the degree of sophistication to the belief position but instead to its argumentation (i.e., supporting the position inadequately or sophisticatedly). This, however, implies that it is not sufficient to just collect belief positions (via closed items).

These difficulties with traditional instruments led us to develop a questionnaire with open items (Rott & Leuders, 2016a ). In this questionnaire, the participants are asked to give their written opinion on mathematical-philosophical questions as those given in Section 1 to assess their MEB. In a quantitative study using this questionnaire with 439 pre-service teachers (PST), we were able to show that belief position and argumentation ( inflexible vs. sophisticated ) can be differentiated (i.e., coded with high interrater agreement) and that a more convincing argumentation is not tied to one specific position (i.e., belief position and argumentation are statistically independent from each other).

However, the study by Rott and Leuders ( 2016a ) was limited (a) to mathematics PST, whereby a larger and more diverse sample would be desirable. Additionally, (b) belief argumentation was only coded dichotomously, whereby investigating students’ arguing in a more differentiated way would be preferable. Therefore, in order to replicate and expand the results from the previous study, the study presented here was conducted.

2.3 Critical thinking

To be able to locate beliefs within competence models for assessment in higher education (cf. Blömeke, Zlatkin-Troitschanskaia, Kuhn, & Fege, 2013 ; Baumert & Kunter, 2013 ), this study focuses not on only one cognitive dimension, that is, beliefs, but also on a second cognitive dimension, that is, an aspect of mathematical ability. We do not measure mathematical ability in terms of assessing success of attending university courses; instead, the use of students’ knowledge is captured by drawing on the concept of critical thinking (CT) (Rott, Leuders, & Stahl, 2015 ).

Although CT has its roots in several fields of research (e.g., philosophy, psychology, and education) and there are many different conceptualizations of CT, it is commonly attributed to the following abilities (cf. Facione, 1990 ; Lai, 2011 ): analyzing arguments, claims, or evidence; making inferences using inductive or deductive reasoning; judging or evaluating and making decisions; or solving problems. Consistent with this list, Jablonka ( 2014 , p. 121) defines CT as

a set of generic thinking and reasoning skills, including a disposition for using them, as well as a commitment to using the outcomes of CT as a basis for decision-making and problem solving.

All cited characterizations imply that CT is not a simple trait but a complex bundle of traits that cannot be located within one domain alone. However, as Facione ( 1990 , p. 14) points out, there are domain-specific manifestations of CT. A mathematics-specific interpretation of CT, the importance of which is stressed by Jablonka ( 2014 ), is used in the study at hand.

To conceptualize CT, we refer to Stanovich and Stanovich ( 2010 ), who adapt and extend dual process theory (e.g., Kahneman, 2011 ). They distinguish fast, automatic, emotional, subconscious thinking, that is, the “autonomous mind,” from slow, effortful, logical, conscious thinking, the latter encompassing the “algorithmic” and the “reflective mind.” In this model, CT is identified within the reflective mind; persons think critically when they engage in hypothetical thinking or when autonomous and algorithmic processes are checked and, if necessary, overridden (Stanovich & Stanovich, 2010 )—which fits Jablonka’s definition of CT as thinking and reasoning skills, including a disposition for using them. Therefore, we define CT in mathematics as those processes that consciously regulate the algorithmic use of mathematical procedures (Rott et al., 2015 ).

2.4 Measuring critical thinking

There have been several attempts at designing tests for CT, for example, the Ennis-Weir test in which participants are presented with a letter that contains complex arguments to which they are supposed to write a response (Ennis & Weir, 1985 ). Another example is the Watson-Glaser Critical Thinking Appraisal (Pearson Education, 2012 ), which consists of five subtests. In one of the subtests, for example, the probability of truth of inferences based on given information has to be rated on a 5-point Likert scale ranging from “true” to “false.” These tests, however, are domain-general and do not take into account mathematics-specific characteristics of CT (cf. Jablonka, 2014 ).

Stanovich and Stanovich ( 2010 ) suggest using tasks with an obvious but incorrect solution that needs to be checked thoroughly. Finding correct solutions of such tasks can help identify the disposition of thinking critically. We used this idea to construct a test for mathematical CT (MCT), arguing that such an assessment should reflect discipline-specific processes without requiring higher level mathematics (Rott et al., 2015 ). For this test, we adapted items from Stanovich, from Frederick ( 2005 ) who explicitly mentions the “mathematical content” (Frederick, 2005 , p. 37) of his Cognitive Reflection Test, and others; details and examples of this test are presented in Section 3 .

In previous studies with PST (Rott et al., 2015 , Rott & Leuders, 2016a ), this MCT test was able to differentiate between different programs of study: the more demanding the mathematics part of the program of studies, the greater the mean scores in the MCT test (PST for upper secondary school scored significantly higher than for lower secondary and primary schools). Additionally, this test supported the differentiation of MEB in position and argumentation by showing that belief position is not correlated to MCT, whereas belief argumentation is significantly positively correlated to MCT (i.e., students who argue sophisticatedly have greater scores in the MCT test; Rott et al., 2015 ; Rott & Leuders, 2016a ).

2.5 Research goals

Building on the literature of the belief and knowledge dimensions and previous studies, with the goal to present an instrument for measuring MEB, the following hypotheses were tested:

In students’ denotative MEB, belief position (whether they believe that mathematics is justified mostly deductively or inductively) and belief argumentation (how they reason for their position) can be distinguished. This was evaluated via an interrater agreement score.

As shown in Rott and Leuders ( 2016a ), we expect students’ belief positions and argumentations in this study to be statistically independent of each other, using a chi-square test.
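Independence of two coded variables of this kind is conventionally checked with a chi-square test on a contingency table. The sketch below computes the statistic by hand on invented counts; the table layout and all numbers are illustrative assumptions, not the study's data:

```python
# Chi-square test of independence on a hypothetical 2x2 table:
# rows = belief position (deductive, inductive),
# columns = argumentation quality (simple, sophisticated).
# Counts are invented for illustration; they are NOT the study's data.
table = [[120, 40],
         [130, 45]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (observed - expected) ** 2 / expected

# For a 2x2 table, dof = 1; the critical value at alpha = 0.05 is 3.841.
print(round(chi2, 3),
      "consistent with independence" if chi2 < 3.841 else "dependent")
```

A statistic below the critical value, as with these invented counts, would fail to reject independence, which is the pattern the hypothesis predicts for position and argumentation.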

Students’ MCT scores will differ between programs of study: students aiming for “higher mathematics” (PST for upper secondary schools as well as Bachelor of Science who were attending “Analysis” and “Linear Algebra”) are more successful in MCT than students with less demanding mathematics lectures (PST for lower secondary, primary, and special education schools who were attending “Elementary Mathematics”). This was evaluated with an ANOVA.

As observed in previous studies, MCT scores will not be correlated with belief position but rather with belief argumentation. This was evaluated with another ANOVA.
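A one-way ANOVA compares between-group to within-group variance via the F statistic. As a hedged illustration of the computation only (the scores and group sizes below are invented, not the study's data):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: ratio of between-group
    to within-group mean squares."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented MCT scores for three argumentation groups (illustrative only):
scores = [[2, 3, 3, 4], [4, 5, 5, 6], [6, 7, 7, 8]]
print(round(one_way_anova_F(scores), 2))  # 24.0
```

A large F, as here, would indicate that the group means differ more than within-group noise can explain; with real data the F value is compared against the F distribution for the relevant degrees of freedom.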

3 Methodology

In this section, information on the study and its participants are given; thereafter, methods for measuring MEB and MCT are presented.

Several instruments were used in this study: a test for connotative EB (the CAEB by Stahl and Bromme ( 2007 )), questionnaires for denotative MEB in two dimensions, and certainty and justification of knowledge, as well as the MCT test. In this article, only the latter two are reported.

3.1 Participants

At the beginning of the winter semester of 2017/2018, the instruments were distributed within mathematics lectures (or related tutorials) that specifically addressed students at the beginning of their studies at the University of Cologne, Germany. Choosing those lectures gave us access to all students with a focus on mathematics (see Table 3 for the students’ programs of study).

Participating in the study or refusing to participate did not affect the students' outcomes in those lectures. In total, more than 600 students (with individually generated pseudonyms) voluntarily participated, of whom 581 students (65% female) completed the instruments. Even though the study was conducted in lectures for first-semester students, not all students were in their first semester: some repeated lectures, others had changed their program of study. The mean number of semesters in the participants' current programs of study was 1.2 (standard deviation 0.8); the mean total number of semesters at a university was 1.6 (standard deviation 1.6).

3.2 Measuring mathematics-related epistemological beliefs

The interview study by Rott et al. (2014) has shown that it is not easy (especially for students) to argue context-free (i.e., without given positions to refer to) in relation to challenging philosophical questions. Therefore, the interviewees were presented with two introductory quotations, with which they should agree or disagree at the beginning of the interviews. The corresponding statements are not necessarily mutually exclusive, but stimulate a discussion about the different positions. This approach worked well insofar as the students were better able to discuss the topic. Therefore, we also used the introductory quotations in our questionnaire (Rott & Leuders, 2016a). The prompts regarding the dimension justification of knowledge are given in Table 1.

The answers of the participants to the partial questions (a–d) were regarded as one text block and then coded in relation to two dimensions: (1) position and (2) argumentation. The first code records which position the participants have chosen, deductive or inductive. In this study, only five participants (less than 1%) were able to give good reasons, examples, and/or situations for both positions; arguing for both positions (not untypical for experts) can therefore be neglected in the present study with first-year students.

The second code refers to the argumentation. In contrast to the previous study, in which two values (inflexible and sophisticated) were coded (Rott & Leuders, 2016a), four values can now be coded with high interrater agreement (Cohen's κ = 0.82; see the Appendix for details): "inflexible" is divided into "inadequate" and "simple," and "no argumentation" was added. After calculating the interrater reliability, the differing scores were re-coded consensually by the two raters (the author and a research assistant).
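Cohen's κ corrects raw percentage agreement for the agreement two raters would reach by chance given their marginal category frequencies. A stdlib sketch, where the two rating lists are invented examples and not the study's codings:

```python
from collections import Counter

def cohens_kappa(rater1: list, rater2: list) -> float:
    """Cohen's kappa for two raters assigning categorical codes."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    # Chance agreement from the raters' marginal distributions:
    expected = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Invented codes over the four argumentation categories:
r1 = ["none", "simple", "simple", "soph", "inad", "simple", "none", "soph"]
r2 = ["none", "simple", "simple", "soph", "simple", "simple", "none", "soph"]
print(round(cohens_kappa(r1, r2), 2))
```

With seven of eight matching codes, raw agreement is 0.875, but κ is lower because some of that agreement is expected by chance; values above roughly 0.8 are usually read as high agreement.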

The students’ texts were analyzed for the quality of their arguments. As it was not the goal of this research to analyze the structure of their reasoning, the texts were not coded with the Toulmin model (e.g., Knipping, 2008 ). However, the terms by Toulmin ( 2003 ) are used to operationalize the following categories with the claim for “deductive” or “inductive discovery.”

No argumentation is coded if all text fields are empty. This category is also assigned if only individual words have been entered (e.g., "no" for the question of whether one has gained experience with it) while the other fields have not been filled in.

Inadequate is coded if the argumentation does not fit the epistemological meaning of the question. This could be either a (more or less) consistent and elaborate argumentation that lacks reference to the question (e.g., arguing for the importance of "inductive methods like calculating an example in problem solving") or an argumentation with reference to the question that lacks adequate warrants (e.g., an overemphasis of subjective experiences instead of arguing for mathematical discovery in general), perhaps because the respondents never thought about such a question.

Simple is coded if the argumentation contains warrants and/or data that fit the epistemological meaning of the question but does not reveal an in-depth examination of the topic. Sometimes, facts and opinions are not clearly separated. Usually, the contradictory position is not rebutted. Example warrants or data of this category would be "mathematical knowledge is deductively justified because one proceeds logically" or "mathematical reasoning starts with axioms," respectively. This definition includes argumentations that simply reproduce warrants from the introductory quotations (without additional reflection).

Sophisticated is coded if the argumentation contains warrants and/or data and/or backings that show an in-depth understanding of the topic, that is, if (at least two) arguments have been combined to reach a conclusive argument, for example: "By great mathematicians like Leonhard Euler and Bernhard Riemann research diaries have been handed down. This shows that they have tried out examples page by page before setting up rules—so they have proceeded inductively." A single (good) argument can be sufficient if it is not taken from the introductory quotations. Usually, data are stated explicitly and the contradictory position is rebutted.

In this way, representatives were found for all possible combinations of position and argumentation; one example each is given in Table 2 (translated from German to English).

3.3 Measuring mathematical critical thinking

We have developed a mathematical test with 11 items similar to and including the well-known bat-and-ball task:

A bat and a ball cost $ 1.10 in total. The bat costs $ 1 more than the ball. How much does the ball cost? (cf. Frederick, 2005 ; Kahneman, 2011 )

All items were designed to be solvable with simple mathematical procedures (i.e., no items resemble written argumentations like those in the Ennis & Weir (1985) test), but to suggest an intuitive but incorrect answer, like $0.10 in the bat-and-ball task.
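The intuitive answer of $0.10 fails the check: the bat would then cost $1.10 and the total $1.20. Setting up the algebra, b + (b + 1.00) = 1.10 gives b = 0.05. A quick verification with exact arithmetic:

```python
from fractions import Fraction

# Let b be the ball's price. Then b + (b + 1.00) = 1.10,
# so b = (1.10 - 1.00) / 2.
total = Fraction("1.10")
difference = Fraction("1.00")
ball = (total - difference) / 2
bat = ball + difference

print(ball)  # 1/20 of a dollar, i.e. $0.05
assert bat + ball == total and bat - ball == difference
```

Using Fraction avoids the binary floating-point rounding that would otherwise make an equality check on 1.10 unreliable.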

The items allow measuring the disposition to reflect upon "obvious" algorithms and solutions, that is, critical thinking (see Section 2). All items have been validated both quantitatively and qualitatively (Rott & Leuders, 2016b) and were rated dichotomously with 1 point for a correct and 0 points for a wrong solution. Because of the non-linear nature of raw data (Boone, Staver, & Yale, 2014, p. 6 ff.), a Rasch model (software Winsteps Version 3.91.0; Linacre, 2005) was used to transform the students' test scores into values on a one-dimensional competence scale (confirming that all items measure MCT instead of other traits like computational skills). In the previous study, two items had been eliminated because of underdiscrimination (Rott et al., 2015). Here, both items have been replaced, and the Rasch analysis shows that all MCT items are within the reasonable mean-square (MNSQ) ranges (between 0.5 and 1.5; Boone et al., 2014, p. 166 f.). All MCT items, their empirical solution rates, and their Rasch values are presented in the Appendix.
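In the dichotomous Rasch model, the probability of a correct response depends only on the difference between person ability θ and item difficulty b. A minimal sketch of that logistic relationship, with illustrative values; this is not the Winsteps estimation procedure itself:

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    written in numerically stable logistic form."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person whose ability equals the item's difficulty has a 50% chance:
print(rasch_probability(0.0, 0.0))  # 0.5
# Higher ability relative to difficulty raises the probability:
print(round(rasch_probability(1.0, 0.0), 3))  # 0.731
```

Fitting the model means estimating the θ and b values that best reproduce the observed right/wrong matrix, which places persons and items on the same one-dimensional logit scale.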

The students filled in the instruments on paper and were given as much time as they needed to do so. On average, they took 10–12 min for the MCT test and thereafter 12–15 min for the MEB test regarding justification of knowledge.

4 Results and discussion

In this section, results of the MEB and MCT tests are analyzed individually; thereafter, possible connections between both cognitive dimensions are explored.

4.1 Mathematics-related epistemological beliefs

Table 3 shows the distribution of students for the position code, sorted by their programs of study. Overall, in those groups, the numbers of students arguing for either position are surprisingly evenly distributed. Looking at the curricula for German Secondary School (KMK, 2003 ), we had expected a higher proportion for the position “inductive” in the study at hand. Oftentimes, algorithms and theorems are presented by teachers and then practiced with exercises, but not proven, meaning deductive reasoning plays a negligible role. Fittingly, in the previous study (Rott & Leuders, 2016a ), almost two-thirds of the students argued for “inductive” and only one-third for “deductive.” Further research is needed to explain the distribution in the study at hand.

As stated above, the argumentation code could be applied with high interrater agreement (confirming H1). The distribution of the coded statements is given in Table 4. Additionally, the mean number of words for each category is given (for comparison, the introductory prompts in the questionnaire each contain 50 words). There is a general trend of “more words go along with a higher argumentation category.” However, “inadequate” has a higher average word count than “simple”; together with the minima (Footnote 4) and maxima, this shows that a large number of words is not sufficient for a high category. (See the Appendix for a further analysis.)

In Table 5 , the distribution of the argumentation code is broken down by the participants’ programs of study. The bottom line in the “No argumentation” column reveals that 30.6% of the participants did not write arguments in this part of the questionnaire. The low percentages in the column “Inadequate” suggest that almost all students who were willing to write down an answer were at least able to use warrants from the given prompts (i.e., being coded with “simple”). Finally, the column “Sophisticated” shows that, with regard to their argumentation, the Bachelor of Science (Mathematics and Business Mathematics combined) students show higher absolute and relative numbers than the PSTs (for different school types). Some of the students (11.1%) who have decided to start studying mathematics as a major thus seem to have thought intensively about the subject and its epistemology.

The number of students with sophisticated codes is low (3.4%; see Table 5 ). This is in line with the previous study which had shown that arguing sophisticatedly is difficult for students (7.3% in Rott & Leuders, 2016a ). The even lower proportion in this study was expected, as most participants were in their first semester and therefore inexperienced in university-level mathematics and its philosophy (a semester effect was also observed in Rott et al., 2015 ).

A chi-square test (Table 6) comparing belief position and argumentation (expected frequencies assuming statistical independence are given in parentheses) gives no evidence that the two dimensions are statistically dependent on each other. This confirms the corresponding result from Rott and Leuders (2016a) using the finer-grained argumentation coding (H2). This finding supports the validity concerns regarding traditional questionnaires that measure belief argumentation by collecting belief positions (see Section 2).
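
The mechanics of such a test can be sketched in a few lines: expected frequencies under independence are computed from the margins, and the Pearson statistic sums the squared deviations. The counts below are invented for illustration and are not those of Table 6:

```python
# Hypothetical 2x3 contingency table (belief position x argumentation level);
# the numbers are made up for illustration, NOT taken from the study.
observed = [[20, 30, 10],
            [25, 28, 12]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected frequency under statistical independence:
# (row total * column total) / grand total
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Pearson chi-square statistic: sum of (O - E)^2 / E over all cells
chi2 = sum((o - e) ** 2 / e
           for o_row, e_row in zip(observed, expected)
           for o, e in zip(o_row, e_row))
# For these invented counts, chi2 is small (about 0.61), i.e., the table
# gives no evidence against independence of the two dimensions.
```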

4.2 Mathematical critical thinking

The Rasch model of the participants’ MCT ability provides metrical latent variables ranging from − 3 to 3, with low values indicating a low ability (Table 7). The MCT results (column “Rasch-M”) reveal significant differences between the groups of different programs of study (one-way ANOVA: F = 19.2; df = 5; p < 0.0001). Tukey post hoc tests (all either non-significant or p < 0.01) show that there are three groups: PSTs for Upper Secondary schools, Business Informatics students, and Bachelor of Science students form the top group. They do not differ significantly from each other, but achieve significantly higher scores than all other groups. PSTs for Lower Secondary Schools and Special Education Schools do not differ from each other, but score significantly higher than PSTs for Primary Schools (confirming H3). In Cologne, students of the latter group are obliged to attend mathematics lectures (as all primary school teachers in Germany have to teach mathematics); all other students chose mathematics (as their major or as a school subject to teach) voluntarily. As most of the students were at the beginning of their first semester, this result cannot be caused by their university education. Instead, it implies a selection effect for the different programs of study. Students with higher MCT abilities seem to be more likely to choose study programs with higher mathematical demands.
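
A one-way ANOVA of this kind compares between-group to within-group variance. The sketch below hand-rolls the F statistic on invented Rasch-like scores for three hypothetical groups (the group labels and numbers are illustrative, not the study's data, which yielded F = 19.2 with df = 5):

```python
# Invented Rasch-type scores for three illustrative groups.
groups = [
    [0.8, 1.2, 0.9, 1.1],       # e.g., a high-scoring group
    [0.2, 0.4, 0.1, 0.5],       # e.g., a middle group
    [-0.6, -0.4, -0.7, -0.3],   # e.g., a lower-scoring group
]

k = len(groups)                       # number of groups
n = sum(len(g) for g in groups)       # total number of observations
grand_mean = sum(x for g in groups for x in g) / n

# Between-group sum of squares: group sizes times squared mean deviations
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: squared deviations from each group's mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# F = MS_between / MS_within with df1 = k - 1 and df2 = n - k
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
# For these invented scores, f_stat is about 67.6: the group means differ
# far more than the within-group scatter would suggest by chance.
```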

4.3 Connections between mathematics-related epistemological beliefs and mathematical critical thinking

An interesting result from the previous study was a correlative relationship between the MEB codes and the results of the MCT test (Rott & Leuders, 2016a ). There was no correlation with regard to belief position ( deductive vs. inductive ), but a significant correlation with regard to belief argumentation (the latter was coded in two stages in the previous study: inflexible vs. sophisticated ).

Table 8 shows the mean values (and standard deviations) of the MCT Rasch scores, sorted according to the two belief dimensions. A two-way ANOVA (after excluding the data for “No position”) shows that there are no significant differences in the MCT scores for students holding different belief positions (totals in the rows “Deductive” and “Inductive”; F  = 0.26, df  = 1, p  = 0.61), but significant differences for belief argumentation (totals in the columns “No argumentation,” “Inadequate,” “Simple,” and “Sophisticated”; F  = 3.49, df  = 3, p  = 0.02) and no interaction effects ( F  = 2.14, df  = 3, p  = 0.09).

Tukey post hoc tests regarding argumentation show that students arguing sophisticatedly score higher in the MCT test than students with no, inadequate, or simple argumentations ( p  < 0.01); other differences are not significant. This confirms the results from the previous study (H4). An explanation for this finding might be the fact that both MCT and belief argumentation tap into a kind of reflectiveness of the students, which belief position does not. (Versions of this table for sub-groups of different programs of study are given in the Appendix .)

The results in this section rely on statistical methods to differentiate patterns that occur by chance from patterns with meaning. The use of significance testing should be reflected upon as this might lead to non-reproducible results—discussed as the “replication crisis” (cf. Inglis, Schukajlow, Van Dooren, & Hannula, 2018 )—or false conclusions—discussed as the “ p value problem” (cf. Matthews, 2017 ). Regarding the former, researchers call for more replication studies. There is, however, no easy solution to fix the latter; no substitute method fixes all problems. Instead, we have to be careful regarding conclusions drawn from significance testing. Triangulating and replicating results—which is one of the goals of this study—might be one way of dealing with such issues.

5 General discussion

Research on beliefs, especially on domain-specific EB, is still an open field for theoretical and methodological developments (Safrudiannur, 2020 ). Studies on EB, especially studies using a quantitative approach (e.g., Schommer, 1990 ), often only address belief positions, deriving a degree of sophistication from the positions. Theoretical considerations, however, suggest that such an approach might not be sufficient to adequately capture belief sophistication. At least for the domain of mathematics, there are convincing arguments supporting different belief positions, even those that might generally be associated with “naïve” beliefs (e.g., regarding the certainty of knowledge). Therefore, the study at hand presents and reflects upon an instrument for measuring denotative MEB in which university students’ belief positions and argumentations are differentiated. Both aspects of beliefs can be coded separately and the results suggest that they are statistically independent of each other. This result has clear implications for research: Many studies (like COACTIV; Krauss et al., 2008 ) use beliefs (including EB) as covariates; generally, beliefs are collected with self-report Likert scale items measuring belief positions rather than argumentations. In the study at hand, however, correlations of both aspects of beliefs with another cognitive dimension, that is, mathematical critical thinking (measured with an MCT test), suggest that belief argumentations seem to be more meaningful than belief positions (as the former show significant correlations to MCT scores whereas the latter do not). It could be inferred that measuring beliefs should not only address belief positions.

Further implications, especially for teaching mathematics (at schools or universities), need to be studied in the future. It seems plausible that teachers should know arguments both speaking for and against certain epistemological positions if they want to teach their students about mathematics, that is, what it means to think and act like mathematicians, instead of only about mathematical procedures.

For the sub-group of the PST in the sample, the study at hand confirms previous results on MEB (Rott & Leuders, 2016a ). In a wide sense of the term, this study could be interpreted as a replication, using a refined methodology (as claimed by Maxwell, Lau, & Howard, 2015 ) to address limitations of the original study, extending the results to a more diverse sample (mathematics, business mathematics, and informatics majors).

Additionally, using open items instead of Likert scale items could help in decreasing the social desirability problem (cf. Di Martino & Sabena, 2010 ) as the students cannot simply choose statements that they think are desirable; instead, participants have to argue for themselves.

However, the need to write down answers instead of checking boxes is not only a feature but also a limitation of this study. The proportion of participants (30.6%) not arguing indicates that the questionnaire also measures the willingness to write down an argumentation. Additionally, the choice of specific prompts to start the discussion, and the fact that they do not represent a real dichotomy, might influence participants’ answers. The latter is not regarded as problematic because the prompts succeeded in initiating discussions; nevertheless, different prompts will be tested.

An important factor might be the epistemological content, as only justification of knowledge has been addressed in this article. However, previous studies (Rott et al., 2015) have shown similar results for certainty of knowledge , indicating that the independence of belief position and argumentation, as well as the correlations of the latter with MCT, are not tied to a single EB dimension.

From the way the MEB questionnaire is designed, it follows that specific kinds of beliefs are measured. On the scale from subjective to objective knowledge (see Section 2 ), compared to most other studies (especially those in the tradition of Schommer, 1990 ), the beliefs that are measured in this study are more on the “objective” side, drawing heavily on knowledge about epistemology.

Another limitation regards the selection of the sample, as all participants are enrolled in one university. This limitation is mitigated because this specific university is very large and not limited to attracting a special group of students.

Familiarity of the participants with MCT items like the bat-and-ball task may present another limitation. However, this kind of research is not part of school or university curricula in Germany. Very similar solution rates of the MCT items in this and previous studies that were conducted in two different cities also suggest that knowledge about the items does not distort the results: for example, the bat-and-ball task had a solution rate of 55% in Rott et al. ( 2015 ), compared to a solution rate of 59% in this study; the dice task (red and green sides) had solution rates of 22% and 23%, respectively.

It will be very interesting to explore the influence of knowledge on the MEB test and the development of students’ MEB, respectively. As Bromme, Kienhues, and Stahl (2008) point out, knowledge gain does not necessarily lead to more sophisticated EB. The results presented in this article set a clear starting point, as exposure to knowledge that could distort the measurement prior to joining the university is very unlikely: in Germany, teaching epistemology is not part of any school curriculum. Students with more knowledge have already participated in a continuation of this study. At the beginning of the winter terms 2018/2019 and 2019/2020, we collected data regarding MEB and MCT in mathematics lectures, mainly addressing students in their third and fifth semesters, respectively, in a pseudo-longitudinal and a real longitudinal study (with ca. 450 students in 2018, 850 students in 2019, and 100 who participated on all three survey dates). As the students have been experiencing university mathematics in general and proving in particular, it will be interesting to see how the distributions of belief position and argumentation have changed.

Notes

Footnote 1: Schommer’s study introduced not only a quantitative approach to assessing EB but also the idea of EB as a multi-dimensional construct (Hofer & Pintrich, 1997, p. 106).

Footnote 2: The participants were given German translations of the prompts; in mathematical contexts, the German word for justification, “Begründung,” captures the same meaning.

Footnote 3: Empirical data show that many students manage to gain 5 instead of 4 points, but only a handful of students gain 10 instead of 9 points; that is, the latter difference of 1 point is more difficult to bridge than the former.

Footnote 4: An example of a “simple” argumentation with four words is “axioms and deductive reasoning,” given by the student with the codename SBEN-23.

References

Barrow, J. D. (1992). Pi in the sky: Counting, thinking, and being. Oxford, UK: Oxford University Press.

Baumert, J., & Kunter, M. (2013). The COACTIV model of teachers’ professional competence. In M. Kunter, J. Baumert, W. Blum, U. Klusmann, S. Krauss, & M. Neubrand (Eds.), Cognitive activation in the mathematics classroom and professional competence of teachers. Results from the COACTIV project (pp. 25–48). New York, NY: Springer.


Blömeke, S., Zlatkin-Troitschanskaia, O., Kuhn, C., & Fege, J. (Eds.). (2013). Modeling and measuring competencies in higher education . Rotterdam, the Netherlands: Sense.

Boone, W. J., Staver, J. R., & Yale, M. S. (2014). Rasch analysis in the human sciences . Dordrecht, the Netherlands: Springer.


Bromme, R. (2005). Thinking and knowing about knowledge – A plea for critical remarks on psychological research programs on epistemological beliefs. In M. Hoffmann, J. Lenhard, & F. Seeger (Eds.), Activity and sign – Grounding mathematics education (pp. 191–201). New York, NY: Springer.

Bromme, R., Kienhues, D., & Stahl, E. (2008). Knowledge and epistemological beliefs: An intimate but complicate relationship. In M. S. Khine (Ed.), Knowing, knowledge and beliefs: Epistemological studies across diverse cultures (pp. 423–441). Dordrecht, the Netherlands: Springer.

Clough, M. P. (2007). Teaching the nature of science to secondary and post-secondary students: Questions rather than tenets, The Pantaneto Forum, Issue 25, http://www.pantaneto.co.uk/issue25/front25.htm , January. Republished (2008) in the California Journal of Science Education, 8 (2), 31–40.

Di Martino, P., & Sabena, C. (2010). Teachers’ beliefs: The problem of inconsistency with practice. In M. Pinto & T. Kawasaki (Eds.), Proceedings of the 34th Conference of the International Group for the Psychology of Mathematics Education (vol. 2, pp. 313–320). Belo Horizonte, Brazil: PME.


Ennis, R. H., & Weir, E. (1985). The Ennis-Weir critical thinking essay test . Pacific Grove, CA: Midwest Publications.

Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction . Executive summary “The Delphi Report”. Millbrae, CA: The California Academic Press.

Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives , 19 (4), 25–42.


Hofer, B. K., & Pintrich, P. R. (1997). The development of epistemological theories: Beliefs about knowledge and knowing and their relation to learning. Review of Educational Research , 67 (1), 88–140.

Horsten, L. (2018). Philosophy of mathematics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2018 Edition). Retrieved from https://plato.stanford.edu/archives/spr2018/philosophy-mathemat/philosophy-mathematics/ . Accessed 1 Oct 2020.

Inglis, M., Schukajlow, S., Van Dooren, W., & Hannula, M. (2018). Replication in mathematics education. In E. Bergqvist, M. Österholm, C. Granberg, & L. Sumpter (Eds.), Proceedings of the 42nd Conference of the International Group for the Psychology of Mathematics Education (vol. 1, pp. 195–196). Umeå, Sweden: PME.

Jablonka, E. (2014). Critical thinking in mathematics education. In S. Lerman (Ed.), Encyclopedia of mathematics education (pp. 121–125). Dordrecht, the Netherlands: Springer.

Kahneman, D. (2011). Thinking, fast and slow. London, UK: Penguin Books Ltd.

Knipping, C. (2008). A method for revealing structures of argumentations in classroom proving processes. ZDM-Mathematics Education , 40 , 427–441.

Krauss, S., Neubrand, M., Blum, W., Baumert, J., Kunter, M., & Jordan, A. (2008). Die Untersuchung des professionellen Wissens deutscher Mathematiklehrerinnen und -lehrer im Rahmen der COACTIV-Studie. Journal für Mathematik-Didaktik , 29 (3/4), 223–258.

Kultusministerkonferenz (KMK). (2003). Beschlüsse der Kultusministerkonferenz: Bildungsstandards im Fach Mathematik für den Mittleren Schulabschluss [Resolutions by the Standing Conference of Ministers of Education and Cultural Affairs: Educational standards in mathematics for the intermediate school leaving certificate],  http://www.kmk.org/fileadmin/veroeffentlichungen_beschluesse/2003/2003_12_04-Bildungsstandards-Mathe-Mittleren-SA.pdf . Accessed 1 Oct 2020.

Lai, E. R. (2011). Critical thinking: A literature review . Upper Saddle River, NJ: Pearson www.pearsonassessments.com/hai/images/tmrs/criticalthinkingreviewfinal.pdf . Accessed 1 Oct 2020.

Lakatos, I. (1976). Proofs and refutations . New York, NY: Cambridge University Press.

Lederman, N. G., Abd-El-Khalick, F., Bell, R. L., & Schwartz, R. S. (2002). Views of nature of science questionnaire: Toward valid and meaningful assessment of learners’ conceptions of nature of science. Journal of Research in Science Teaching , 39 (6), 497–521.

Linacre, J. M. (2005). Winsteps Rasch analysis software . PO Box 811322, Chicago IL 60681-1322, USA. http://www.winsteps.com/index.htm

Matthews, R. (2017). The ASA’s p -value statement, one year on. Significance , 14 (2), 38–41.

Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist , 70 (6), 487–498.

Muis, K. R. (2004). Personal epistemology and mathematics: A critical review and synthesis of research. Review of Educational Research , 74 (3), 317–377.

Murphy, P. K., & Mason, L. (2006). Changing knowledge and beliefs. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology – Second edition (pp. 305–324). London, UK: Lawrence Erlbaum Publishers.

Pearson Education. (2012). Watson-Glaser critical thinking appraisal user-guide and technical manual .  http://www.talentlens.co.uk/assets/news-and-events/watson-glaser-user-guide-and-technical-manual.pdf . Accessed 14 Mar 2019.

Pehkonen, E. (1999). Conceptions and images of mathematics professors on teaching mathematics in school. International Journal of Mathematical Education in Science and Technology , 30 (3), 389–397.

Philipp, R. A. (2007). Mathematics teachers’ beliefs and affect. In F. K. Lester (Ed.), Second handbook of research on mathematics teaching and learning (pp. 257–315). Charlotte, NC: Information Age.

Pólya, G. (1954). Mathematics and plausible reasoning. Volume 1: Induction and analogy in mathematics. Princeton, NJ: Princeton University Press.

Rott, B., & Leuders, T. (2016a). Inductive and deductive justification of knowledge: Flexible judgments underneath stable beliefs in teacher education. Mathematical Thinking and Learning , 18 (4), 271–286.

Rott, B., & Leuders, T. (2016b). Mathematical critical thinking: The construction and validation of a test. In C. Csikos, A. Rausch, & J. Szitányi (Eds.), Proceedings of the 40th Conference of the International Group for the Psychology of Mathematics Education (vol. 4, pp. 139–146). Szeged, Hungary: PME.

Rott, B., Leuders, T., & Stahl, E. (2014). “Is mathematical knowledge certain? – Are you sure?” An interview study to investigate epistemic beliefs. Mathematica Didactica , 37 , 118–132.

Rott, B., Leuders, T., & Stahl, E. (2015). Assessment of mathematical competencies and epistemic cognition of pre-service teachers. Zeitschrift für Psychologie , 223 (1), 39–46.

Russell, B. (1919). Introduction to mathematical philosophy . London, UK: Allen & Unwin.

Safrudiannur. (2020). Measuring beliefs quantitatively. Criticizing the use of Likert scale and offering a new approach . Wiesbaden, Germany: Springer.

Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology , 82 , 498–504.

Schommer-Aikins, M. (2004). Explaining the epistemological belief system: Introducing the embedded systemic model and coordinated research approach. Educational Psychologist , 39 (1), 19–29. https://doi.org/10.1207/s15326985ep3901_3

Stahl, E. (2011). The generative nature of epistemological judgments: Focusing on interactions instead of elements to understand the relationship between epistemological beliefs and cognitive flexibility. In J. Elen, E. Stahl, R. Bromme, & G. Clarebout (Eds.), Links between beliefs and cognitive flexibility – lessons learned (pp. 37–60). Dordrecht, the Netherlands: Springer.

Stahl, E., & Bromme, R. (2007). The CAEB: An instrument for measuring connotative aspects of epistemological beliefs. Learning and Instruction , 17 , 773–785.

Stanovich, K. E., & Stanovich, P. J. (2010). A framework for critical thinking, rational thinking, and intelligence. In D. Preiss & R. J. Sternberg (Eds.), Innovations in educational psychology: Perspectives on learning, teaching and human development (pp. 195–237). New York, NY: Springer.

Thompson, A. G. (1992). Teachers’ beliefs and conceptions: A synthesis of the research. In D. A. Grouws (Ed.), Handbook of research on mathematic learning and teaching (pp. 127–146). New York, NY: Macmillan.

Toulmin, S. (2003). The uses of argument, updated edition . Cambridge, UK: Cambridge University Press.

Trautwein, U., & Lüdtke, O. (2007). Epistemological beliefs, school achievement, and college major: A large-scale longitudinal study on the impact of certainty beliefs. Contemporary Educational Psychology , 32 , 348–366.

Tsai, C.-C. (1998a). An analysis of scientific epistemological beliefs and learning orientations of Taiwanese eight graders. Science Education , 82 (4), 473–489.

Tsai, C.-C. (1998b). An analysis of Taiwanese eighth graders’ science achievement, scientific epistemological beliefs and cognitive structure outcomes after learning basic atomic theory. International Journal of Science Education , 20 (4), 413–425.

Wiener, N. (1923). Collected works: With commentaries . Cambridge, MA: The MIT Press.


Acknowledgments

I would like to thank the editors Vilma Mesa and Wim Van Dooren as well as the reviewers for their constructive feedback on previous versions of this manuscript.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Faculty of Mathematics and Natural Sciences, University of Cologne, Gronewaldstr. 2, 50931, Cologne, Germany

Benjamin Rott


Corresponding author

Correspondence to Benjamin Rott .


Supplementary information

(PDF 406 kb)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Rott, B. Inductive and deductive justification of knowledge: epistemological beliefs and critical thinking at the beginning of studying mathematics. Educ Stud Math 106 , 117–132 (2021). https://doi.org/10.1007/s10649-020-10004-1


Accepted : 22 October 2020

Published : 14 November 2020

Issue Date : January 2021



  • Epistemological beliefs
  • Critical thinking
  • Mathematical competencies
  • Replication

Difference Between Inductive and Deductive Reasoning


Inductive reasoning draws general conclusions from specific observations or instances. Conversely, deductive reasoning uses available information, facts or premises to arrive at a conclusion. The two forms of logic proceed in opposite directions, yet they are often confused with each other. In this article, we are going to tell you the basic differences between inductive and deductive reasoning, which will help you to understand them better.

Definition of Inductive Reasoning

In research, inductive reasoning refers to the logical process in which specific instances or situations are observed or analysed in order to establish general principles. In this process, multiple propositions are believed to provide strong evidence for the truth of the conclusion. It is used to develop an understanding, on the basis of observed regularities, of how something works.

Inductive arguments are uncertain: their strength describes the extent to which the conclusions drawn from the premises are credible.

In inductive reasoning, there is always the possibility that the conclusion drawn is false, even if all the premises are true. The reasoning rests on experience and observations that support the apparent truth of the conclusion. Further, an inductive argument can be strong or weak, as it only describes the likelihood that the inference is true.

Definition of Deductive Reasoning

Deductive reasoning is a form of logic in which specific inferences are drawn from multiple premises (general statements). It establishes the relationship between premises and conclusion: when all the premises are true and the rules of deduction are applied correctly, the conclusion obtained is necessarily true.

Deductive logic is based on the fundamental law of reasoning, i.e., if X then Y. It implies the direct application of available information or facts to arrive at new information or facts. In this approach, the researcher starts from a theory and generates a hypothesis that can be tested; observations are then recorded, yielding particular data that confirm (or refute) the hypothesis’s validity.
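
The “if X then Y” pattern can be sketched as rule application: given a general rule and a fact that matches its antecedent, the conclusion follows with certainty. The rule and fact below (the classic “all humans are mortal” syllogism) are invented for illustration:

```python
# General premise (rule): if something is_human, then it is_mortal.
rules = {"is_human": "is_mortal"}
# Specific premise (fact): Socrates is human.
facts = {"socrates": {"is_human"}}

def deduce(entity):
    """Apply every rule whose antecedent already holds for the entity."""
    derived = set(facts.get(entity, set()))
    for antecedent, consequent in rules.items():
        if antecedent in derived:
            derived.add(consequent)
    return derived

# With true premises and a valid rule, the conclusion is guaranteed.
assert "is_mortal" in deduce("socrates")
```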

Key Differences Between Inductive and Deductive Reasoning

The points provided below clarify the difference between inductive and deductive reasoning in detail:

  • The argument in which the premises give reasons in support of the probable truth of the conjecture is inductive reasoning. The elementary form of valid reasoning, wherein the premises guarantee the truth of the conclusion, is deductive reasoning.
  • While inductive reasoning uses a bottom-up approach, deductive reasoning uses a top-down approach.
  • Inductive reasoning works toward a general conclusion from specific observations. On the other hand, deductive reasoning starts with general premises.
  • The basis of inductive reasoning is behaviour or pattern. Conversely, deductive reasoning depends on facts and rules.
  • Inductive reasoning begins with specific observations, identifies a pattern, forms a hypothesis, and develops a theory from related cases. In contrast, deductive reasoning begins with a general statement, i.e., a theory, from which a hypothesis is derived; evidence or observations are then examined to reach the final conclusion.
  • In inductive reasoning, the argument supporting the conclusion may or may not be strong. On the contrary, in deductive reasoning, the argument can be proved valid or invalid.
  • Inductive reasoning moves from the specific to the general. Deductive reasoning, by contrast, moves from the general to the particular.
  • In inductive reasoning, the inferences drawn are probabilistic. In deductive reasoning, by contrast, the conclusions drawn are necessarily true if the premises are correct.
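
The probabilistic, defeasible character of induction can be illustrated with the classic swan example (a sketch invented for illustration, not drawn from either article above):

```python
# Inductive step: every swan observed so far is white, so we conjecture
# "all swans are white." The conjecture is only probable, never certain.
observed_swans = ["white", "white", "white", "white"]
assert all(color == "white" for color in observed_swans)

# A single new observation defeats the generalization, something a valid
# deductive conclusion from true premises could never suffer.
observed_swans.append("black")
assert not all(color == "white" for color in observed_swans)
```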


To sum up, inductive and deductive reasoning are the two kinds of logic used in the field of research to develop hypotheses and arrive at conclusions on the basis of information that is believed to be true. Inductive reasoning considers specific events to make a generalization. In contrast, deductive reasoning takes general statements as a base to arrive at a particular conclusion.



  • Open access
  • Published: 02 May 2024

Use of the International IFOMPT Cervical Framework to inform clinical reasoning in postgraduate level physiotherapy students: a qualitative study using think aloud methodology

  • Katie L. Kowalski 1 ,
  • Heather Gillis 1 ,
  • Katherine Henning 1 ,
  • Paul Parikh 1 ,
  • Jackie Sadi 1 &
  • Alison Rushton 1  

BMC Medical Education, volume 24, Article number: 486 (2024)


Background

Vascular pathologies of the head and neck are rare but can present as musculoskeletal problems. The International Federation of Orthopedic Manipulative Physical Therapists (IFOMPT) Cervical Framework (Framework) aims to assist evidence-based clinical reasoning for safe assessment and management of the cervical spine considering potential for vascular pathology. Clinical reasoning is critical to physiotherapy, and developing high-level clinical reasoning is a priority for postgraduate (post-licensure) educational programs.

Purpose

To explore the influence of the Framework on clinical reasoning processes in postgraduate physiotherapy students.

Methods

Qualitative case study design using think aloud methodology and interpretive description, informed by COnsolidated criteria for REporting Qualitative research. Participants were postgraduate musculoskeletal physiotherapy students who learned about the Framework through standardized delivery. Two cervical spine cases explored clinical reasoning processes. Coding and analysis of transcripts were guided by Elstein’s diagnostic reasoning components and the Postgraduate Musculoskeletal Physiotherapy Practice model. Data were analyzed using thematic analysis (inductive and deductive) for individuals and then across participants, enabling analysis of key steps in clinical reasoning processes and use of the Framework. Trustworthiness was enhanced with multiple strategies (e.g., second researcher challenged codes).

Results

For all participants (n = 8), the Framework supported clinical reasoning using primarily hypothetico-deductive processes. It informed vascular hypothesis generation in the patient history and testing of the vascular hypothesis through patient history questions and selection of physical examination tests, to inform clarity and support for diagnosis and management. Most participants’ clinical reasoning processes were characterized by high-level features (e.g., prioritization); however, there was a continuum of proficiency. Clinical reasoning processes were informed by deep knowledge of the Framework integrated with a breadth of wider knowledge and supported by a range of personal characteristics (e.g., reflection).

Conclusions

Findings support use of the Framework as an educational resource in postgraduate physiotherapy programs to inform clinical reasoning processes for safe and effective assessment and management of cervical spine presentations considering potential for vascular pathology. Individualized approaches may be required to support students, owing to a continuum of clinical reasoning proficiency. Future research is required to explore use of the Framework to inform clinical reasoning processes in learners at different levels.


Introduction

Musculoskeletal neck pain and headache are highly prevalent and among the most disabling conditions globally that require effective rehabilitation [ 1 , 2 , 3 , 4 ]. A range of rehabilitation professionals, including physiotherapists, assess and manage musculoskeletal neck pain and headache. Assessment of the cervical spine can be a complex process. Patients can present to physiotherapy with vascular pathology masquerading as musculoskeletal pain and dysfunction, with neck pain and/or headache as a common first symptom [ 5 ]. While vascular pathologies of the head and neck are rare [ 6 ], they are important considerations within a cervical spine assessment to facilitate the best possible patient outcomes [ 7 ]. The International IFOMPT (International Federation of Orthopedic Manipulative Physical Therapists) Cervical Framework (Framework) provides guidance in the assessment and management of the cervical spine region, considering the potential for vascular pathologies of the neck and head [ 8 ]. Two separate, but related, risks are considered: the risk of misdiagnosis of an existing vascular pathology and the risk of a serious adverse event following musculoskeletal interventions [ 8 ].

The Framework is a consensus document iteratively developed through rigorous methods and the best contemporary evidence [ 8 ], and is also published as a Position Statement [ 7 ]. Central to the Framework are clinical reasoning and evidence-based practice, providing guidance in the assessment of the cervical spine region, considering the potential for vascular pathologies in advance of planned interventions [ 7 , 8 ]. The Framework was developed and published to be a resource for practicing musculoskeletal clinicians and educators. It has been implemented widely within IFOMPT postgraduate (post-licensure) educational programs, influencing curricula by enabling a comprehensive and systematic approach when considering the potential for vascular pathology [ 9 ]. Frequently reported curricula changes include an emphasis on the patient history and incorporating Framework-recommended physical examination tests to evaluate a vascular hypothesis [ 9 ]. The Framework aims to assist musculoskeletal clinicians in their clinical reasoning processes; however, no study has investigated students’ use of the Framework to inform their clinical reasoning.

Clinical reasoning is a critical component of physiotherapy practice as it is fundamental to assessment and diagnosis, enabling physiotherapists to provide safe and effective patient-centered care [ 10 ]. This is particularly important for postgraduate physiotherapy educational programs, where developing a high level of clinical reasoning is a priority for educational curricula [ 11 ] and critical for achieving advanced practice physiotherapy competency [ 12 , 13 , 14 , 15 ]. At this level of physiotherapy, diagnostic reasoning is emphasized as an important component of a high level of clinical reasoning, informed by advanced use of domain-specific knowledge (e.g., propositional, experiential) and supported by a range of personal characteristics (e.g., adaptability, reflectiveness) [ 12 ]. Facilitating the development of clinical reasoning improves physiotherapists’ performance and patient outcomes [ 16 ], underscoring the importance of clinical reasoning to physiotherapy practice. Understanding students’ use of the Framework to inform their clinical reasoning can support optimal implementation of the Framework within educational programs to facilitate safe and effective assessment and management of the cervical spine for patients.

Purpose

To explore the influence of the Framework on the clinical reasoning processes of postgraduate level physiotherapy students.

Methods

Using a qualitative case study design, think aloud case analyses enabled exploration of clinical reasoning processes in postgraduate physiotherapy students. Case study design allows evaluation of experiences in practice, providing knowledge and accounts of practical actions in a specific context [ 17 ]. Case studies offer opportunity to generate situationally dependent understandings of accounts of clinical practice, highlighting the action and interaction that underscore the complexity of clinical decision-making in practice [ 17 ]. This study was informed by an interpretive description methodological approach with thematic analysis [ 18 , 19 ]. Interpretive description is coherent with mixed methods research and pragmatic orientations [ 20 , 21 ], and enables generation of evidence-based disciplinary knowledge and clinical understanding to inform practice [ 18 , 19 , 22 ]. Interpretive description has evolved for use in educational research to generate knowledge of educational experiences and the complexities of health care education to support achievement of educational objectives and professional practice standards [ 23 ]. The COnsolidated criteria for REporting Qualitative research (COREQ) informed the design and reporting of this study [ 24 ].

Research team

All research team members hold physiotherapy qualifications, and most hold advanced qualifications specializing in musculoskeletal physiotherapy. The research team is based in Canada and has varying levels of academic credentials (ranging from Clinical Masters to PhD or equivalent) and occupations (ranging from PhD student to Director of Physical Therapy). The final author (AR) is also an author of the Framework, which represents international and multiprofessional consensus. Authors HG and JS are lecturers on one of the postgraduate programs which students were recruited from. The primary researcher and first author (KK) is a US-trained Physical Therapist and Postdoctoral Research Associate investigating spinal pain and clinical reasoning in the School of Physical Therapy at Western University. Authors KK, KH and PP had no prior relationship with the postgraduate educational programs, students, or the Framework.

Study setting

Western University in London, Ontario, Canada offers a one-year Advanced Health Care Practice (AHCP) postgraduate IFOMPT-approved Comprehensive Musculoskeletal Physiotherapy program (CMP) and a postgraduate Sport and Exercise Medicine (SEM) program. Think aloud case analyses interviews were conducted using Zoom, a viable option for qualitative data collection and audio-video recording of interviews that enables participation for students who live in geographically dispersed areas across Canada [ 25 ]. Interviews with individual participants were conducted by one researcher (KK or KH) in a calm and quiet environment to minimize disruption to the process of thinking aloud [ 26 ].

Participants

AHCP postgraduate musculoskeletal physiotherapy students ≥ 18 years of age in the CMP and SEM programs were recruited via email and an introduction to the research study during class by KK, using purposive sampling to ensure theoretical representation. The purposive sample ensured key characteristics of participants were included, specifically gender, ethnicity, and physiotherapy experience (years, type). AHCP students must have attended standardized teaching about the Framework to be eligible to participate. Exclusion criteria included inability to communicate fluently in English. As think-aloud methodology seeks rich, in-depth data from a small sample [ 27 ], this study sought to recruit 8–10 AHCP students. This range was informed by prior think aloud literature and anticipated to balance diversity of participant characteristics, similarities in musculoskeletal physiotherapy domain knowledge and rich data supporting individual clinical reasoning processes [ 27 , 28 ].

Learning about the IFOMPT Cervical Framework

CMP and SEM programs included standardized teaching of the Framework to inform AHCP students’ clinical reasoning in practice. Delivery included a presentation explaining the Framework, access to the full Framework document [ 8 ], and discussion of its role to inform practice, including a case analysis of a cervical spine clinical presentation, by research team members AR and JS. The full Framework document that is publicly available through IFOMPT [ 8 ] was provided to AHCP students, as the Framework Position Statement [ 7 ] had not yet been published. Discussion and case analysis were led by AHCP program leads in November 2021 (CMP, including research team member JS) and January 2022 (SEM).

Think aloud case analyses data collection

Using think aloud methodology, the analytical processes of how participants use the Framework to inform clinical reasoning were explored in an interview with one research team member not involved in AHCP educational programs (KK or KH). The think aloud method enables description and explanation of complex information paralleling the clinical reasoning process and has been used previously in musculoskeletal physiotherapy [ 29 , 30 ]. It facilitates the generation of rich verbal data [ 27 ] as participants verbalize their clinical reasoning protocols [ 27 , 31 ]. Participants were aware of the aim of the research study and the research team’s clinical and research backgrounds, supporting an open environment for depth of data collection [ 32 ]. There was no prior relationship between participants and research team members conducting interviews.

Participants were instructed to think aloud their analysis of two clinical cases, presented in random order (Supplementary  1 ). Case information was provided in stages to reflect the chronology of assessment of patients in practice (patient history, planning the physical examination, physical examination, treatment). Use of the Framework to inform clinical reasoning was discussed at each stage. The cases enabled participants to identify and discuss features of possible vascular pathology, treatment indications and contraindications/precautions, etc. Two research study team members (HG, PP) developed cases designed to facilitate and elicit clinical reasoning processes in neck and head pain presentations. Cases were tested against the research team to ensure face validity. Cases and think aloud prompts were piloted prior to use with three physiotherapists at varying levels of practice to ensure they were fit for purpose.

Data collection took place from March 30 to August 15, 2022, during the final terms of the AHCP programs and an average of 5 months after standardized teaching about the Framework. During case analysis interviews, participants were instructed to constantly think aloud, and if a pause in verbalizations was sustained, they were reminded to “keep thinking aloud” [ 27 ]. As needed, prompts were given to elicit verbalization of participants’ reasoning processes, including use of the Framework to inform their clinical reasoning at each stage of case analysis (Supplementary  2 ). Aside from this, all interactions between participants and researchers were minimized so as not to interfere with the participant’s thought processes [ 27 , 31 ]. When analysis of the first case was complete, the researcher provided the second case; each case analysis lasted 35–45 min. A break between cases was offered. During and after interviews, field notes were recorded about initial impressions of the data collection session and potential patterns appearing to emerge [ 33 ].

Data analysis

Data from think aloud interviews were analyzed using thematic analysis [ 30 , 34 ], facilitating identification and analysis of patterns in data and key steps in the clinical reasoning process, including use of the Framework to enable its characterization (Fig.  1 ). As established models of clinical reasoning exist, a hybrid approach to thematic analysis was employed, incorporating inductive and deductive processes [ 35 ], which proceeded according to 5 iterative steps [ 34 ]:

Figure 1: Data analysis steps

1. Familiarize with data: Audio-visual recordings were transcribed verbatim by a physiotherapist external to the research team. All transcripts were read and re-read several times by one researcher (KK), checking for accuracy by reviewing recordings as required. Field notes supported depth of familiarization with data.

2. Generate initial codes: Line-by-line coding of transcripts by one researcher (KK) supported generation of initial codes that represented components, patterns and meaning in clinical reasoning processes and use of the Framework. Established preliminary coding models were used as a guide. Elstein’s diagnostic reasoning model [ 36 ] guided generating initial codes of key steps in clinical reasoning processes (Table  1 a) [ 29 , 36 ]. Leveraging richness of data, further codes were generated guided by the Postgraduate Musculoskeletal Physiotherapy Practice model, which describes masters level clinical practice (Table  1 b) [ 12 ]. Codes were refined as data analysis proceeded. All codes were collated within participants along with supporting data.

3. Generate initial themes within participants: Coded data were inductively grouped into initial themes within each participant, reflecting individual clinical reasoning processes and use of the Framework. This inductive stage enabled a systematic, flexible approach to describe each participant’s unique thinking path, offering insight into the complexities of their clinical reasoning processes. It also provided a comprehensive understanding of the Framework informing clinical reasoning and a rich characterization of its components, aiding the development of robust, nuanced insights [ 35 , 37 , 38 ]. Initial themes were repeatedly revised to ensure they were grounded in and reflected raw data.

4. Develop, review and refine themes across participants: Initial themes were synthesized across participants to develop themes that represented all participants. Themes were reviewed and refined, returning to initial themes and codes at the individual participant level as needed.

5. Organize themes into established models: Themes were deductively organized into established clinical reasoning models; first into Elstein’s diagnostic reasoning model, second into the Postgraduate Musculoskeletal Physiotherapy Practice model to characterize themes within each diagnostic reasoning component [ 12 , 36 ].

Trustworthiness of findings

The research study was conducted according to an a priori protocol and additional steps were taken to establish trustworthiness of findings [ 39 ]. Field notes supported deep familiarization with data and served as a means of data source triangulation during analysis [ 40 ]. One researcher coded transcripts and a second researcher challenged codes, with codes and themes rigorously and iteratively reviewed and refined. Frequent debriefing sessions with the research team, reflexive discussions with other researchers and peer scrutiny of initial findings enabled wider perspectives and experiences to shape analysis and interpretation of findings. Several strategies were implemented to minimize the influence of prior relationships between participants and researchers, including author KK recruiting participants, KK and KH collecting/analyzing data, and AR, JS, HG and PP providing input on de-identified data at the stage of synthesis and interpretation.

Results

Nine AHCP postgraduate level students were recruited and participated in data collection. One participant was withdrawn because of unfamiliarity with the standardized teaching session about use of the Framework (no recall of session), despite confirmation of attendance. Data from eight participants were used for analysis (CMP: n  = 6; SEM: n  = 2; Table  2 ), which achieved sample size requirements for think aloud methodology of rich and in-depth data [ 27 , 28 ].

Diagnostic reasoning components

Informed by the Framework, all components of Elstein’s diagnostic reasoning processes [ 36 ] were used by participants, including use of treatment with physiotherapy interventions to aid diagnostic reasoning. An illustrative example is presented in Supplement  3 . Clinical reasoning used primarily hypothetico-deductive processes, reflecting a continuum of proficiency; it was informed by deep Framework knowledge and a breadth of prior knowledge (e.g., experiential) and supported by a range of personal characteristics (e.g., justification for decisions).

Cue acquisition

All participants sought to acquire additional cues early in the patient history, and for some this persisted into the medical history and physical examination. Cue acquisition enabled depth and breadth of understanding patient history information to generate hypotheses and factors contributing to the patient’s pain experience (Table  3 ). All participants asked further questions to understand details of the patients’ pain and their presentation, while some also explored the impact of pain on patient functioning and treatments received to date. There was a high degree of specificity to questions for most participants. Ongoing clinical reasoning through a thorough and complete assessment, even if the patient had previously received treatment for similar symptoms, was important for some participants. Cue acquisition was supported by personal characteristics including a patient-centered approach (e.g., understanding the patient’s beliefs about pain), and one participant reflected on their approach to acquiring patient history cues.

Hypothesis generation

Participants generated an average of 4.5 hypotheses per case (range: 2–8), and most hypotheses (77%) were generated rapidly early in the patient history. Knowledge from the Framework about patient history features of vascular pathology informed vascular hypothesis generation in the patient history for all participants in both cases (Table  4 ). Vascular hypotheses were also generated during the past medical history, where risk factors for vascular pathology were identified and interpreted by some participants who had high levels of suspicion for cervical articular involvement. Non-vascular hypotheses were generated during the physical examination by some participants to explain individual physical examination or patient history cues. Deep knowledge of the patient history section in the Framework supported a high level of cue identification and interpretation for generating vascular hypotheses. Initial hypotheses were prioritized by some participants; however, the level of specificity of hypotheses varied.

Cue evaluation

All participants evaluated cues throughout the patient history and physical examination in relationship to hypotheses generated, indicating use of hypothetico-deductive reasoning processes (Table  5 ). Framework knowledge of patient history features of vascular pathology was used to test vascular hypotheses and aid differential diagnosis. The patient history section supported a high level of cue identification and interpretation of patient history features for all but one participant, and generation of further patient history questions for all participants. The level of specificity of these questions was high for all but one participant. Framework knowledge of recommended physical examination tests, including removal of positional testing, supported planning a focused and prioritized physical examination to further test vascular hypotheses for all participants. No participant indicated intention to use positional testing as part of their physical examination. Treatment with physiotherapy interventions served as a form of cue evaluation, and cues were evaluated to inform prognosis for some participants. At times during the physical examination, some participants demonstrated occasional errors or difficulty with cue evaluation by omitting key physical exam tests (e.g., no cranial nerve assessment despite concerns for trigeminal nerve involvement), selecting physical exam tests in advance of hypothesis generation (e.g., cervical spine instability testing), difficulty interpreting cues, or late selection of a physical examination test. Cue evaluation was supported by a range of personal characteristics. Most participants justified selection of physical examination tests, and some self-reflected on their ability to collect useful physical examination information to inform selection of tests. Precautions to the physical examination were identified by all participants but one, which contributed to an adaptable approach, prioritizing patient safety and comfort. Critical analysis of physical examination information aided interpretation within the context of the patient for most participants.

Hypothesis evaluation

All participants used the Framework to evaluate their hypotheses throughout the patient history and physical examination, continuously shifting their level of support for hypotheses (Table  6 , Supplement  4 ). This informed clarity in the overall level of suspicion for vascular pathology or musculoskeletal diagnoses, which were specific for most participants. Response to treatment with physiotherapy interventions served as a form of hypothesis evaluation for most participants who had low-level suspicion for vascular pathology, highlighting ongoing reasoning processes. Hypotheses evaluated were prioritized by ranking according to level of suspicion by some participants. Difficulties weighing patient history and physical examination cues to inform judgement on the overall level of suspicion for vascular pathology were demonstrated by some participants, who reported that incomplete physical examination data and not being able to see the patient contributed to difficulties. Hypothesis evaluation was supported by the personal characteristic of reflection, where some students reflected on the Framework’s emphasis on the patient history to evaluate a vascular hypothesis.

Treatment

The Framework supported all participants in clinical reasoning related to treatment (Table  7 ). Treatment decisions were always linked to the participant’s overall level of suspicion for vascular pathology or musculoskeletal diagnosis. Framework knowledge supported participants with high level of suspicion for vascular pathology to refer for further investigations. Participants with a musculoskeletal diagnosis kept the patient for physiotherapy interventions. The Framework patient history section supported patient education about symptoms of vascular pathology and safety netting for some participants. Framework knowledge influenced informed consent processes and risk-benefit analysis to support the selection of musculoskeletal physiotherapy interventions, which were specific and prioritized for some participants. Less Framework knowledge related to treatment was demonstrated by some students, generating unclear recommendations regarding the urgency of referral and use of the Framework to inform musculoskeletal physiotherapy interventions. Treatment was supported by a range of personal characteristics. An adaptable approach that prioritized patient safety and was supported by justification was demonstrated in all participants except one. Shared decision-making enabled the selection of physiotherapy interventions, which were patient-centered (individualized, considered whole person, identified future risk for vascular pathology). Communication with the patient’s family doctor facilitated collaborative patient-centered care for most participants.

Discussion

This is the first study to explore the influence of the Framework on clinical reasoning processes in postgraduate physiotherapy students. The Framework supported clinical reasoning that used primarily hypothetico-deductive processes. The Framework informed vascular hypothesis generation in the patient history and testing the vascular hypothesis through patient history questions and selection of physical examination tests to inform clarity and support for diagnosis and management. Most postgraduate students’ clinical reasoning processes were characterized by high-level features (e.g. specificity, prioritization). However, some demonstrated occasional difficulties or errors, reflecting a continuum of clinical reasoning proficiency. Clinical reasoning processes were informed by deep knowledge of the Framework integrated with a breadth of wider knowledge and supported by a range of personal characteristics (e.g., justification for decisions, reflection).

Use of the Framework to inform clinical reasoning processes

The Framework provided a structured and comprehensive approach to support postgraduate students’ clinical reasoning processes in assessment and management of the cervical spine region, considering the potential for vascular pathology. Patient history and physical examination information was evaluated to inform clarity and support the decision to refer for further vascular investigations or proceed with musculoskeletal physiotherapy diagnosis/interventions. The Framework is not intended to lead to a vascular pathology diagnosis [ 7 , 8 ], and following the Framework does not guarantee vascular pathologies will be identified [ 41 ]. Rather, it aims to support a process of clinical reasoning to elicit and interpret appropriate patient history and physical examination information to estimate the probability of vascular pathology and inform judgement about the need to refer for further investigations [ 7 , 8 , 42 ]. Results of this study suggest the Framework has achieved this aim for postgraduate physiotherapy students.

The Framework supported postgraduate students in using primarily hypothetico-deductive diagnostic reasoning processes. This is expected given the diversity of vascular pathology clinical presentations, which precludes a definite clinical pattern, and the inherent complexity of vascular pathology as a potential masquerader of a musculoskeletal problem [ 7 ]. It is also consistent with prior research investigating clinical reasoning processes in musculoskeletal physiotherapy postgraduate students [ 12 ] and clinical experts [ 29 ], where hypothetico-deductive and pattern recognition diagnostic reasoning are employed according to the demands of the clinical situation [ 10 ]. Diagnostic reasoning of most postgraduate students in this study demonstrated features suggestive of high-level clinical reasoning in musculoskeletal physiotherapy [ 12 ], including ongoing reasoning with high-level cue identification and interpretation, specificity and prioritization during assessment and treatment, use of physiotherapy interventions to aid diagnostic reasoning, and prognosis determination [ 12 , 29 , 43 ]. Expert physiotherapy practice has been further described as using a dialectical model of clinical reasoning with seamless transitions between clinical reasoning strategies [ 44 ]. While diagnostic reasoning was a focus in this study, postgraduate students considered a breadth of information as important to their reasoning (e.g., the patient’s perspectives of the reason for their pain). This suggests wider reasoning strategies (e.g., narrative, collaborative) were employed to enable shared decision-making within the context of patient-centered care.

Study findings also highlighted a continuum of proficiency in use of the Framework to inform clinical reasoning processes. Not all students demonstrated all characteristics of high-level clinical reasoning and there are suggestions of incomplete reasoning processes, for example occasional errors in evaluating cues. Some students offered explanations such as incomplete case information as factors contributing to difficulties with clinical reasoning processes. However, the ability to critically evaluate incomplete and potentially conflicting clinical information is consistently identified as an advanced clinical practice competency [ 14 , 43 ]. A continuum of proficiency in clinical reasoning in musculoskeletal physiotherapy is supported by wider healthcare professions describing acquisition and application of clinical knowledge and skills as a developmental continuum of clinical competence progressing from novice to expert [ 45 , 46 ]. The range of years of clinical practice experience in this cohort of students (3–14 years) or prior completed postgraduate education may have contributed to the continuum of proficiency, as high-quality and diverse experiential learning is essential for the development of high-level clinical reasoning [ 14 , 47 ].

Deep knowledge of the Framework informs clinical reasoning processes

Postgraduate students demonstrated deep Framework knowledge to inform clinical reasoning processes. All students demonstrated knowledge of patient history features of vascular pathology, recommended physical examination tests to test a vascular hypothesis, and the need to refer if there is a high level of suspicion for vascular pathology. A key development in the recent Framework update is the removal of the recommendation to perform positional testing [ 8 ]. All students demonstrated knowledge of this development, and none wanted to test a vascular hypothesis with positional testing. Most also demonstrated Framework knowledge about considerations for planning treatment with physiotherapy interventions (e.g., risk-benefit analysis, informed consent), though not all, which underscores the continuum of proficiency in postgraduate students. Rich organization of multidimensional knowledge is a required component for high level clinical reasoning and is characteristic of expert physiotherapy practice [ 10 , 48 , 49 ]. Most postgraduate physiotherapy students displayed this expert practice characteristic through integration of deep Framework knowledge with a breadth of prior knowledge (e.g., experiential, propositional) to inform clinical reasoning processes. This highlights the utility of the Framework in postgraduate physiotherapy education to develop advanced level evidence-based knowledge informing clinical reasoning processes for safe assessment and management of the cervical spine, considering the potential for vascular pathology [ 9 , 8 , 50 , 51 , 52 ].

Framework supports personal characteristics to facilitate integration of knowledge and clinical reasoning

The Framework supported personal characteristics of postgraduate students, which are key drivers for the complex integration of advanced knowledge and high-level clinical reasoning [ 10 , 12 , 48 ]. For all students, the Framework supported justification for decisions and patient-centered care, emphasizing a whole-person approach and shared decision-making. Further demonstrating a continuum of proficiency, the Framework supported a wider breadth of personal characteristics for some students, including critical analysis, reflection, self-analysis, and adaptability. These personal characteristics illustrate the interwoven cognitive and metacognitive skills that influence and support a high level of clinical reasoning [ 10 , 12 ] and the development of clinical expertise [ 48 , 53 ]. For example, reflection is critical to developing high-level clinical reasoning and advanced-level practice [ 12 , 54 , 55 ]. Postgraduate students reflected on prior knowledge, experiences, and action within the context of current Framework knowledge, emphasizing active engagement in cognitive processes to inform clinical reasoning. Reflection-in-action is highlighted by self-analysis and adaptability: these characteristics require continuous cognitive processing to consider personal strengths and limitations in the context of the patient and evidence-based practice, adapting the clinical encounter as required [ 53 , 55 ]. These findings highlight use of the Framework in postgraduate education to support development of personal characteristics that are indicative of an advanced level of clinical practice [ 12 ].

Synthesis of findings

Derived from synthesis of research study findings and informed by the Postgraduate Musculoskeletal Physiotherapy Practice model [ 12 ], use of the Framework to inform clinical reasoning processes in postgraduate students is illustrated in Fig.  2 . Overlapping clinical reasoning, knowledge, and personal characteristic components emphasize the complex interaction of factors contributing to clinical reasoning processes. Personal characteristics of postgraduate students underpin clinical reasoning and knowledge, highlighting their role in facilitating the integration of these two components. Bolded subcomponents indicate convergence of results reflecting all postgraduate students; the remaining subcomponents underscore the variability among postgraduate students contributing to a continuum of clinical reasoning proficiency. The relative weighting of the components is approximately equal to balance the breadth and convergence of subcomponents. The synthesis of findings aligns with the Postgraduate Musculoskeletal Physiotherapy Practice model [ 12 ], though some differences exist. Limited personal characteristics were identified in this study, with little convergence across students, which may be due to the objective of this study and the case analysis approach.

figure 2

Use of the Framework to inform clinical reasoning in postgraduate level musculoskeletal physiotherapy students. Adapted from the Postgraduate Musculoskeletal Physiotherapy Practice model [ 12 ].

Strengths and limitations

Think aloud case analyses enabled situationally dependent understanding of use of the Framework to inform clinical reasoning processes in postgraduate level students [ 17 ], considering the rare potential for vascular pathology. A limitation of this approach was the standardized nature of the case information provided to students, which may have influenced clinical reasoning processes. Future research studies may consider patient case simulation to address this limitation [ 30 ]. Interviews were conducted during the second half of the postgraduate educational program, and this timing could have influenced clinical reasoning processes compared to interviews conducted at the end of the program. Future research can explore use of the Framework to inform clinical reasoning processes in established advanced practice physiotherapists. The sample size of this study aligns with recommendations for think aloud methodology [ 27 , 28 ] and achieved rich data, and purposive sampling enabled wide representation of key characteristics (e.g., gender, ethnicity, country of training, physiotherapy experiences), which enhances transferability of findings. Students were aware of the study objective in advance of interviews, which may have contributed to a heightened level of awareness of vascular pathology. The prior relationship between students and researchers may also have influenced results; however, several strategies were implemented to minimize this influence.

Implications

The Framework is widely implemented within IFOMPT postgraduate educational programs and has led to important shifts in educational curricula [ 9 ]. Findings of this study support use of the Framework as an educational resource in postgraduate physiotherapy programs to inform clinical reasoning processes for safe and effective assessment and management of cervical spine presentations considering the potential for vascular pathology. Individualized approaches may be required to support each student, owing to a continuum of clinical reasoning proficiency. As the Framework was written for practicing musculoskeletal clinicians, future research is required to explore use of the Framework to inform clinical reasoning in learners at different levels, for example entry-level physiotherapy students.

The Framework supported clinical reasoning that primarily used hypothetico-deductive processes in postgraduate physiotherapy students. It informed vascular hypothesis generation in the patient history and testing of the vascular hypothesis through patient history questions and selection of physical examination tests, to inform clarity and support for diagnosis and management. Most postgraduate students' clinical reasoning processes were characterized as high-level, informed by deep Framework knowledge integrated with a breadth of wider knowledge, and supported by a range of personal characteristics facilitating the integration of advanced knowledge and high-level clinical reasoning. Future research is required to explore use of the Framework to inform clinical reasoning in learners at different levels.

Data availability

The dataset used and analyzed during the current study is available from the corresponding author on reasonable request.

Safiri S, Kolahi AA, Hoy D, Buchbinder R, Mansournia MA, Bettampadi D et al. Global, regional, and national burden of neck pain in the general population, 1990–2017: systematic analysis of the global burden of Disease Study 2017. BMJ. 2020;368.

Stovner LJ, Nichols E, Steiner TJ, Abd-Allah F, Abdelalim A, Al-Raddadi RM, et al. Global, regional, and national burden of migraine and tension-type headache, 1990–2016: a systematic analysis for the global burden of Disease Study 2016. Lancet Neurol. 2018;17:954–76.


Cieza A, Causey K, Kamenov K, Hanson SW, Chatterji S, Vos T. Global estimates of the need for rehabilitation based on the Global Burden of Disease study 2019: a systematic analysis for the global burden of Disease Study 2019. Lancet. 2020;396:2006–17.

Côté P, Yu H, Shearer HM, Randhawa K, Wong JJ, Mior S et al. Non-pharmacological management of persistent headaches associated with neck pain: A clinical practice guideline from the Ontario protocol for traffic injury management (OPTIMa) collaboration. European Journal of Pain (United Kingdom). 2019;23.

Diamanti S, Longoni M, Agostoni EC. Leading symptoms in cerebrovascular diseases: what about headache? Neurological Sciences. 2019.

Debette S, Compter A, Labeyrie MA, Uyttenboogaart M, Metso TM, Majersik JJ, et al. Epidemiology, pathophysiology, diagnosis, and management of intracranial artery dissection. Lancet Neurol. 2015;14:640–54.

Rushton A, Carlesso LC, Flynn T, Hing WA, Rubinstein SM, Vogel S, et al. International framework for examination of the cervical region for potential of vascular pathologies of the neck prior to musculoskeletal intervention: International IFOMPT Cervical Framework. J Orthop Sports Phys Ther. 2023;53:7–22.

Rushton A, Carlesso LC, Flynn T, Hing WA, Kerry R, Rubinstein SM, et al. International framework for examination of the cervical region for potential of vascular pathologies of the neck prior to orthopaedic manual therapy (OMT) intervention: International IFOMPT Cervical Framework. International IFOMPT Cervical Framework; 2020.

Hutting N, Kranenburg R, Taylor A, Wilbrink W, Kerry R, Mourad F. Implementation of the International IFOMPT Cervical Framework: a survey among educational programmes. Musculoskelet Sci Pract. 2022;62:102619.

Jones MA, Jensen G, Edwards I. Clinical reasoning in physiotherapy. In: Campbell S, Watkins V, editors. Clinical reasoning in the health professions. 3rd ed. Philadelphia: Elsevier; 2008. pp. 245–56.


Fennelly O, Desmeules F, O’Sullivan C, Heneghan NR, Cunningham C. Advanced musculoskeletal physiotherapy practice: informing education curricula. Musculoskelet Sci Pract. 2020;48:102174.

Rushton A, Lindsay G. Defining the construct of masters level clinical practice in manipulative physiotherapy. Man Ther. 2010;15.

Rushton A, Lindsay G. Defining the construct of masters level clinical practice in healthcare based on the UK experience. Med Teach. 2008;30:e100–7.

Noblet T, Heneghan NR, Hindle J, Rushton A. Accreditation of advanced clinical practice of musculoskeletal physiotherapy in England: a qualitative two-phase study to inform implementation. Physiotherapy (United Kingdom). 2021;113.

Tawiah AK, Stokes E, Wieler M, Desmeules F, Finucane L, Lewis J, et al. Developing an international competency and capability framework for advanced practice physiotherapy: a scoping review with narrative synthesis. Physiotherapy. 2023;122:3–16.

Williams A, Rushton A, Lewis JJ, Phillips C. Evaluation of the clinical effectiveness of a work-based mentoring programme to develop clinical reasoning on patient outcome: a stepped wedge cluster randomised controlled trial. PLoS ONE. 2019;14.

Miles R. Complexity, representation and practice: case study as method and methodology. Issues Educational Res. 2015;25.

Thorne S, Kirkham SR, MacDonald-Emes J. Interpretive description: a noncategorical qualitative alternative for developing nursing knowledge. Res Nurs Health. 1997;20.

Thorne S, Kirkham SR, O’Flynn-Magee K. The Analytic challenge in interpretive description. Int J Qual Methods. 2004;3.

Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. Sage; 2003.

Dolan S, Nowell L, Moules NJ. Interpretive description in applied mixed methods research: exploring issues of fit, purpose, process, context, and design. Nurs Inq. 2023;30.

Thorne S. Interpretive description. In: Routledge International Handbook of Qualitative Nursing Research. 2013. pp. 295–306.

Thompson Burdine J, Thorne S, Sandhu G. Interpretive description: a flexible qualitative methodology for medical education research. Med Educ. 2021;55.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349–57.

Archibald MM, Ambagtsheer RC, Casey MG, Lawless M. Using Zoom videoconferencing for qualitative data collection: perceptions and experiences of researchers and participants. Int J Qual Methods. 2019;18.

Van Someren M, Barnard YF, Sandberg J. The think aloud method: a practical approach to modelling cognitive processes. Volume 11. London: Academic Press; 1994.

Fonteyn ME, Kuipers B, Grobe SJ. A description of think aloud method and protocol analysis. Qual Health Res. 1993;3:430–41.

Lundgrén-Laine H, Salanterä S. Think-Aloud technique and protocol analysis in clinical decision-making research. Qual Health Res. 2010;20:565–75.

Doody C, McAteer M. Clinical reasoning of expert and novice physiotherapists in an outpatient orthopaedic setting. Physiotherapy. 2002;88.

Gilliland S. Physical therapist students’ development of diagnostic reasoning: a longitudinal study. J Phys Therapy Educ. 2017;31.

Ericsson KA, Simon HA. How to study thinking in everyday life: contrasting think-aloud protocols with descriptions and explanations of thinking. Mind Cult Act. 1998;5:178–86.

Dwyer SC, Buckle JL. The space between: on being an insider-outsider in qualitative research. Int J Qual Methods. 2009;8.

Shenton AK. Strategies for ensuring trustworthiness in qualitative research projects. Educ Inform. 2004;22:63–75.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5.

Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge, MA: Harvard University Press; 1978.

Proudfoot K. Inductive/deductive hybrid thematic analysis in mixed methods research. J Mix Methods Res. 2023;17.

Charters E. The use of think-aloud methods in qualitative research: an introduction to think-aloud methods. Brock Educ J. 2003;12.

Nowell LS, Norris JM, White DE, Moules NJ. Thematic analysis: striving to meet the trustworthiness criteria. Int J Qual Methods. 2017;16:1–13.

Thurmond VA. The point of triangulation. J Nurs Scholarsh. 2001;33.

Hutting N, Wilbrink W, Taylor A, Kerry R. Identifying vascular pathologies or flow limitations: important aspects in the clinical reasoning process. Musculoskelet Sci Pract. 2021;53:102343.

de Best RF, Coppieters MW, van Trijffel E, Compter A, Uyttenboogaart M, Bot JC, et al. Risk assessment of vascular complications following manual therapy and exercise for the cervical region: diagnostic accuracy of the International Federation of Orthopaedic Manipulative physical therapists framework (the Go4Safe project). J Physiother. 2023;69:260–6.

Petty NJ. Becoming an expert: a masterclass in developing clinical expertise. Int J Osteopath Med. 2015;18:207–18.

Edwards I, Jones M, Carr J, Braunack-Mayer A, Jensen GM. Clinical reasoning strategies in physical therapy. Phys Ther. 2004;84.

Carraccio CL, Benson BJ, Nixon LJ, Derstine PL. From the educational bench to the clinical bedside: translating the Dreyfus developmental model to the learning of clinical skills. Acad Med. 2008;83:761–7.

Benner P. Using the Dreyfus model of skill acquisition to describe and interpret skill acquisition and clinical judgment in nursing practice and education. Bull Sci Technol Soc. 2004;24:188–99.

Benner P. From novice to expert: excellence and power in clinical nursing practice. Commemorative ed. Upper Saddle River, NJ: Prentice Hall; 2001.

Jensen GM, Gwyer J, Shepard KF, Hack LM. Expert practice in physical therapy. Phys Ther. 2000;80.

Huhn K, Gilliland SJ, Black LL, Wainwright SF, Christensen N. Clinical reasoning in physical therapy: a concept analysis. Phys Ther. 2019;99.

Hutting N, Kranenburg HA, Kerry R. Yes, we should abandon pre-treatment positional testing of the cervical spine. Musculoskelet Sci Pract. 2020;49:102181.

Kranenburg HA, Tyer R, Schmitt M, Luijckx GJ, van der Schans C, Hutting N, et al. Effects of head and neck positions on blood flow in the vertebral, internal carotid, and intracranial arteries: a systematic review. J Orthop Sports Phys Ther. 2019;49:688–97.

Hutting N, Kerry R, Coppieters MW, Scholten-Peeters GGM. Considerations to improve the safety of cervical spine manual therapy. Musculoskelet Sci Pract. 2018;33.

Wainwright SF, Shepard KF, Harman LB, Stephens J. Novice and experienced physical therapist clinicians: a comparison of how reflection is used to inform the clinical decision-making process. Phys Ther. 2010;90:75–88.

Dy SM, Purnell TS. Key concepts relevant to quality of complex and shared decision-making in health care: a literature review. Soc Sci Med. 2012;74:582–7.

Christensen N, Jones MA, Higgs J, Edwards I. Dimensions of clinical reasoning capability. In: Campbell S, Watkins V, editors. Clinical reasoning in the health professions. 3rd ed. Philadelphia: Elsevier; 2008. pp. 101–10.


Acknowledgements

The authors would like to acknowledge study participants and the transcriptionist for their time in completing and transcribing think aloud interviews.

No funding was received to conduct this research study.

Author information

Authors and affiliations

School of Physical Therapy, Western University, London, Ontario, Canada

Katie L. Kowalski, Heather Gillis, Katherine Henning, Paul Parikh, Jackie Sadi & Alison Rushton


Contributions

Katie Kowalski: Conceptualization, methodology, validation, formal analysis, investigation, data curation, writing– original draft, visualization, project administration. Heather Gillis: Validation, resources, writing– review & editing. Katherine Henning: Investigation, formal analysis, writing– review & editing. Paul Parikh: Validation, resources, writing– review & editing. Jackie Sadi: Validation, resources, writing– review & editing. Alison Rushton: Conceptualization, methodology, validation, writing– review & editing, supervision.

Corresponding author

Correspondence to Katie L. Kowalski .

Ethics declarations

Ethics approval and consent to participate

Western University Health Science Research Ethics Board granted ethical approval (Project ID: 119934). Participants provided written informed consent prior to participating in think aloud interviews.

Consent for publication

Not applicable.

Competing interests

Author AR is an author of the IFOMPT Cervical Framework. Authors JS and HG are lecturers on the AHCP CMP program. AR and JS led standardized teaching of the Framework. Measures to reduce the influence of potential competing interests on the conduct and results of this study included: the Framework representing international and multiprofessional consensus, recruitment of participants by author KK, data collection and analysis completed by KK with input from AR, JS and HG at the stage of data synthesis and interpretation, and wider peer scrutiny of initial findings. KK, KH and PP have no potential competing interests.

Authors’ information

The lead author of this study (AR) is the first author of the International IFOMPT Cervical Framework.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Kowalski, K.L., Gillis, H., Henning, K. et al. Use of the International IFOMPT Cervical Framework to inform clinical reasoning in postgraduate level physiotherapy students: a qualitative study using think aloud methodology. BMC Med Educ 24, 486 (2024). https://doi.org/10.1186/s12909-024-05399-x


Received: 11 February 2024

Accepted: 08 April 2024

Published: 02 May 2024

DOI: https://doi.org/10.1186/s12909-024-05399-x


  • International IFOMPT Cervical Framework
  • Clinical reasoning
  • Postgraduate students
  • Physiotherapy
  • Educational research
  • Qualitative research
  • Think aloud methodology

BMC Medical Education

ISSN: 1472-6920

    Deductive Reasoning and Its Limitations Deductive reasoning is a logical method of reasoning in which conclusions are drawn from general principles or premises through valid logical inference. In deductive reasoning, if the premises are true and the reasoning process is valid, then the conclusion necessarily follows. This method is characterized by its certainty and rigor, making it a powerful ...