Critical Thinking in Critical Care: Five Strategies to Improve Teaching and Learning in the Intensive Care Unit

Margaret M. Hayes, Souvik Chatterjee, and Richard M. Schwartzstein


Correspondence and requests for reprints should be addressed to Margaret M. Hayes, M.D., Beth Israel Deaconess Medical Center/Harvard Medical School, Pulmonary and Critical Care Medicine, 330 Brookline Avenue E/ES 201B Boston, MA 02215. E-mail: [email protected]


Received 2016 Dec 15; Accepted 2017 Feb 1; Issue date 2017 Apr.

Critical thinking, the capacity to be deliberate about thinking, is increasingly the focus of undergraduate medical education but is not commonly addressed in graduate medical education. Without critical thinking, physicians, and particularly residents, are prone to cognitive errors, which can lead to diagnostic errors, especially in a high-stakes environment such as the intensive care unit. Although challenging, critical thinking skills can be taught. At this time, there is a paucity of data to support an educational gold standard for teaching critical thinking, but we believe that five strategies, rooted in cognitive theory and our personal teaching experiences, provide an effective framework for teaching critical thinking in the intensive care unit. The five strategies are: (1) make the thinking process explicit by helping learners understand that the brain uses two cognitive processes: type 1, an intuitive pattern-recognizing process, and type 2, an analytic process; (2) discuss cognitive biases, such as premature closure, and teach residents to minimize biases by expressing uncertainty and keeping differentials broad; (3) model and teach inductive reasoning by using concept and mechanism maps, and explicitly teach how this reasoning differs from the more commonly used hypothetico-deductive reasoning; (4) use questions to stimulate critical thinking: "how" or "why" questions can be used to coach trainees and to uncover their thought processes; and (5) assess and provide feedback on the learner's critical thinking. We believe these five strategies provide practical approaches for teaching critical thinking in the intensive care unit.

Keywords: medical education, critical thinking, critical care, cognitive errors

Critical thinking, the capacity to be deliberate about thinking and actively assess and regulate one’s cognition ( 1 – 4 ), is an essential skill for all physicians. Absent critical thinking, one typically relies on heuristics, a quick method or shortcut for problem solving, and can fall victim to cognitive biases ( 5 ). Cognitive biases can lead to diagnostic errors, which result in increased patient morbidity and mortality ( 6 ).

Diagnostic errors are the number one cause of medical malpractice claims (7) and are thought to account for approximately 10% of in-hospital deaths (8). Many factors contribute to diagnostic errors, including cognitive problems and systems issues (9), but it has been shown that cognitive errors are an important source of diagnostic error in almost 75% of cases (10). In addition, a recent report from the Risk Management Foundation, the research arm of the malpractice insurer for the Harvard Medical School hospitals, labeled more than half of the malpractice cases they evaluated as "assessment failures," which included "narrow diagnostic focus, failure to establish a differential diagnosis, [and] reliance on a chronic condition or previous diagnosis" (11). In light of these data and the Institute of Medicine's 2015 recommendation to "enhance health care professional education and training in the diagnostic process" (8), we present this framework as a practical approach to teaching critical thinking skills in the intensive care unit (ICU).

The process of critical thinking can be taught ( 3 ); however, methods of instruction are challenging ( 12 ), and there is no consensus on the most effective teaching model ( 13 , 14 ). Explicit teaching about reasoning, metacognition, cognitive biases, and debiasing strategies may help avoid cognitive errors ( 3 , 15 , 16 ) and enhance critical thinking ( 17 ), but empirical evidence to inform best educational practices is lacking. Assessment of critical thinking is also difficult ( 18 ). However, because it is of paramount importance to providing high-quality, safe, and effective patient care, we believe critical thinking should be both explicitly taught and explicitly assessed ( 12 , 18 ).

Critical thinking is particularly important in the fast-paced, high-acuity environment of the ICU, where medical errors can lead to serious harm ( 19 ). Despite the paucity of data to support an educational gold standard in this field, we propose five strategies, based on educational principles, we have found effective in teaching critical thinking in the ICU ( Figure 1 ). These strategies are not dependent on one another and often overlap. Using the following case scenario as an example for discussion, we provide a detailed explanation, as well as practical tips on how to employ these strategies.

A 45-year-old man with a history of hypertension presents to the emergency department with fatigue, sore throat, low-grade fever, and mild shortness of breath. On arrival to the emergency department, his heart rate is 110 beats/minute and his blood pressure is 90/50 mm Hg. He is given 2 L of intravenous fluids, but his blood pressure continues to fall, and norepinephrine is started. Physical examination is normal with the exception of dry mucous membranes. Laboratory studies performed on blood samples obtained before administration of intravenous fluid show: white blood cell count, 6.0 K/μL; hematocrit, 35%; lactate, 0.8 mmol/L; blood urea nitrogen, 40 mg/dL; and creatinine, 1.1 mg/dL. A chest radiograph shows no infiltrates. He is admitted to the medical intensive care unit.

Attending: What is your assessment of this patient?

Resident: This is a 45-year-old male with a history of hypertension who was sent to us from the emergency department with sepsis.

Attending: That is interesting. I am puzzled: What is the source of infection? And how do you account for the low hematocrit in an essentially healthy man whom you believe to be volume depleted?

Resident: Well, maybe pneumonia will appear on the X-ray in the next 24 hours. With respect to the hematocrit...I'm not really sure.

Figure 1.

Five strategies to teach critical thinking skills in a critical care environment.

Strategy 1: Make the “Thinking Process” Explicit

In the ICU, many attendings are satisfied with the trainee simply putting forth an assessment and plan. In the case presented here, the resident's assessment that the patient has sepsis is likely based on her remembering a few facts about sepsis (e.g., hypotension that is not responsive to fluids) and recognizing a pattern (history of possible infection + fever + hypotension = sepsis). With this information, we may determine that the learner is operating at the lowest level of Bloom's taxonomy, remembering (20) (Figure 2); in this case, she seems to be using reflexive or automatic thought. In a busy ICU, it is tempting for the attending to simply overlook the response and proceed with one's own plan, but we should expect more. As indicated in the attending's response, we should make the thinking process explicit and push the resident up Bloom's taxonomy: to describe, explain, apply, analyze, evaluate, and ultimately create (20) (Figure 2).

Figure 2.

The revised Bloom's taxonomy. This schematic depicts six levels of the cognitive domain (the original taxonomy was created in 1956). Remembering is the lowest level; creating is the highest level. Adapted from Anderson and Krathwohl (20).

Faculty members should probe the thought process used to arrive at the assessment and encourage the resident to think about her thinking; that is, to engage in the process of metacognition. We recommend doing this in real time as the trainee is presenting the case by asking “how” and “why” questions (see strategy 4).

Attending: Why do you think he has sepsis?

Resident: Well, he came in with infectious symptoms. Also, his blood pressure is quite low, and it only improved slightly with fluids in the emergency department.

Attending: Okay, but how is blood pressure generated? How could you explain hypotension using other data in the case, such as the low hematocrit?

If the trainee is encouraged to think about her thinking, she may conclude that she was trying to force a “pattern” of sepsis, perhaps because she frequently sees patients with sepsis and because the emergency department framed the case in that way. It is possible that she does not have enough experience in the ICU or specific knowledge about sepsis to accurately assess this patient; in the actual case, a third-year resident with significant ICU experience ultimately admitted to defaulting to pattern recognition.

One way to push learners up Bloom’s taxonomy is to help them understand dual-process theory: the idea that the brain uses two thinking processes, type 1 and type 2 (alternately known as system 1 and system 2). Type 1 thinking is the more intuitive process of decision making; type 2 is an analytical process ( 17 , 21 , 22 ). Type 1 thinking is immediate and unconscious, and the hallmark is pattern recognition; type 2 is deliberate and effortful ( 17 ).

Critical thinkers understand and recognize the dual processes (21) and the fact that type 1 thinking is common in their daily lives. Furthermore, they acknowledge that type 1 reasoning, which is often automatic and unconscious, can be prone to error. There is a paucity of data linking cognitive errors to a particular type of thinking (14), but many of these studies are limited by the fact that they do not test the atypical pattern; as a consequence, they do not truly test the hypothesis that type 2 reasoning will reduce error in more complex cases. It has been shown that combining type 1 and type 2 thinking improves diagnostic accuracy compared with using either method alone (23). We believe that helping learners understand how their minds work will help them recognize when they may be falling into pattern recognition and when this will be problematic (e.g., when there are discordant data, or when one can quickly think of only one diagnosis). If we expect more from our learners, compelling them to understand, analyze, and evaluate, we must provide constant feedback and coaching to help them develop, and we must ask the right questions (see strategy 4) to guide them.

Strategy 2: Discuss Cognitive Biases and De-Biasing Strategies

Cognitive biases are thought patterns that deviate from the typical way of making decisions or judgments (24). They occur commonly when we are under stress or time constraints while making decisions. More than 100 cognitive biases have been described, some of which are more common in medicine than others (25). We believe that the six outlined in Table 1 are particularly prevalent in the ICU.

Six cognitive biases frequently encountered in the intensive care unit

The definitions of these biases are based on their application and use in clinical medicine. Table adapted from Croskerry ( 6 ), Croskerry ( 27 ), and Hogarth ( 37 ).

Although there are many proponents of teaching cognitive biases ( 6 ), there are no studies showing that teaching these to trainees improves their clinical decision making ( 14 ), again recognizing that research in this area has often not focused on the scenarios in which cognitive bias is likely to lead to error. Most cognitive biases are quiescent until the right scenario presents itself ( 26 ), which makes them difficult to study in the clinical context. Imagine an overworked, tired resident in a busy ICU or one who received an incomplete sign-out or felt pressure from the system to make a quick decision to move along patient care. These scenarios occur daily in the ICU; as a consequence, we believe that teaching residents how to recognize biases and giving them strategies to debias is important.

The resident in the clinical scenario outlined here is falling prey to many biases in her assessment that the patient has sepsis. First, it is likely that on her ICU rotation she has seen many patients with sepsis, and thus sepsis is a diagnosis that is easily available to her mind (availability bias). Next, she is falling victim to confirmation bias: The presence of hypotension supports a diagnosis of sepsis and is disproportionately appreciated by the trainee compared with a white blood cell count of 6,000, which does not easily fit with the diagnosis and is ignored. Next, she anchors and prematurely closes on the diagnosis of sepsis and does not look for other possible explanations of hypotension. The resident does not realize that she is subject to these biases; explicitly discussing them will help her understand her thinking process, enable her to recognize when she may be jumping to conclusions, and help her identify when she must switch to type 2 thinking.

Attending: Why do you think he has sepsis?

Resident: Well, he came in with infectious symptoms. Also, his blood pressure is quite low, and it did not improve with fluids in the emergency department. This is similar to the other patient with sepsis.

Attending: I can see why sepsis easily comes to your mind, as we have recently admitted three other patients with sepsis. These patients had similar features to this patient, so your mind is jumping to that conclusion, but if we stop and think together about what pieces of the case don't fit with sepsis, we may come up with a different diagnosis.

Resident: Well, the lack of leukocytosis doesn't make sense.

Attending: Yes! I agree, that is a bit odd. Let's broaden our differential and not anchor on sepsis. What else could this be?

Cognitive forcing strategies (16), the process of making trainees aware of their cognitive biases and then developing strategies to overcome the bias, may help this resident. Studies show that debiasing can be taught to emergency medicine trainees (27), and we believe it can also be taught to critical care trainees, who experience a similar fast-paced and high-stakes learning environment. Proposed debiasing strategies include encouraging trainees to consider alternative diagnoses (3, 6, 27, 28) and promoting broad differentials. In particular, trainees need to be able to rethink cases when confronted with information that is not consistent with the working diagnosis; for example, the absence of leukocytosis, as above. They should be allowed to communicate their level of uncertainty, and we should not think less of them if they do not have a single final answer with a targeted plan (29). When we do not discuss inconsistent information, we essentially give trainees permission to ignore it.

Attending: In addition to the white blood cell count not fitting, I'm also struggling with the hematocrit: How is it 35% in the setting of presumed decreased intravascular volume?

Resident: Hmm.... I'm actually not sure. You're right, though, it doesn't make sense.

Attending: I agree. Let's pause and think about how we are thinking about this case.

To a large degree, recognition of cognitive bias requires metacognition, defined as thinking about one’s thinking ( 3 , 16 , 27 ). This process is optimized with a familiarity with how the mind works; that is, a basic understanding of dual-process theory and cognitive biases. In the ICU, we find it easiest to engage in a group metacognition exercise. The attending asks, “How are we thinking about this case?” This allows both the attending and the team to reflect together on how and why the diagnosis has been made. This can provide insight into the tendency to prematurely close or limit considerations, which has been shown to be the most common cause of inaccurate clinical synthesis ( 10 ).

Other debiasing strategies include accountability ( 6 ) and feedback ( 25 , 30 ). Giving specific and in-the-moment feedback can help residents understand their decisions ( 25 ). It is our job as attendings to provide this feedback, and it is thought that this is one of the most effective debiasing strategies ( 25 ).

Strategy 3: Model and Teach Inductive Reasoning

In medicine, we classically teach clinical reasoning via the hypothetico-deductive strategy ( 31 ) and rarely discuss inductive reasoning. To date, there are no data proving the advantages of one strategy over another, but we believe that modeling inductive reasoning is an important part of critical thinking, especially when type 1 thinking provides limited answers. In hypothetico-deductive reasoning, physicians make a cognitive jump from a few facts to hypotheses framed as a differential diagnosis from which one then deduces characteristics that are matched to the patient ( 32 ). Because this way of thinking relies on memory and pattern recognition, we find that it is more subject to cognitive biases, including premature closure, than inductive reasoning.

In our case, the presence of hypotension leads the trainee to come up with a differential based primarily on that single observation; the resident thinks of diagnoses such as sepsis or cardiogenic shock. Contrast this way of thinking with inductive reasoning, which proceeds in an orderly way from multiple facts to hypotheses ( 32 ). In our case, putting together the facts of hypotension, decreased hematocrit, and elevated blood urea nitrogen/creatinine would lead to a broader list of possible explanations or hypotheses that would include bleeding (see Figure 3 to compare and contrast inductive and deductive reasoning). We propose that this way of thinking is grounded more deeply in pathophysiology, and we believe it leads to broader thinking, because trainees do not have to rely on memory, pattern recognition, or heuristics; rather, they can reason their way through the problem via an understanding of basic mechanisms of health and disease.

Figure 3.

Schematic representations of deductive (1) and inductive (2) reasoning apropos to the clinical case. In deductive reasoning, one fact (F; hypotension) is used to generate multiple hypotheses (H), and then facts that pertain to each are retrofitted (red F*; fever). In inductive reasoning, facts are grouped and used to generate hypotheses. Adapted from Pottier (32).

Inductive reasoning can be practiced using both mechanism and concept maps. Mechanism maps are a visual representation of how the pathophysiology of disease leads to the clinical symptoms ( 33 ), whereas concept maps graphically represent relationships between multiple concepts ( 33 ) and make links explicit. Both types reinforce mechanistic thinking and can be used as tools to avoid cognitive biases. Using our case as an example, if the resident started with the hypotension and made a mechanism ( Figure 4A ) or concept ( Figure 4B ) map, she would be less likely to anchor on the diagnosis of sepsis. This process gives trainees a strategy to broaden their differential and a way to think about the case when they do not know what is going on.

Figure 4.

(A) A mechanism map for a 45-year-old man presenting with cough and shortness of breath, found to have an increased BUN/Cr ratio, a decreased hematocrit, and a normal white blood cell count. (B) A concept map of the clinical case. AFib = atrial fibrillation; BUN/Cr = blood urea nitrogen to creatinine ratio; CAD = coronary artery disease; CO/Q = cardiac output; CVP = central venous pressure; CXR = chest X-ray; GI = gastrointestinal; HR = heart rate; Hx HTN = history of hypertension; MAP = mean arterial pressure; RV = right ventricle; SV = stroke volume; SVR = systemic vascular resistance; WBC = white blood cell.
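The physiologic relationships that anchor a mechanism map such as that in Figure 4A are the standard hemodynamic ones; writing them out makes the inductive chain from the case's facts explicit:

```latex
% Mean arterial pressure is generated by cardiac output acting
% against systemic vascular resistance:
\mathrm{MAP} \approx \mathrm{CO} \times \mathrm{SVR}

% Cardiac output is the product of heart rate and stroke volume:
\mathrm{CO} = \mathrm{HR} \times \mathrm{SV}
```

Tracing hypotension backward through these relations forces the learner to ask whether the problem lies with SVR (vasodilation, as in septic shock) or with CO, and if CO, whether HR or SV is the culprit; because stroke volume depends on preload, the falling hematocrit and elevated BUN/Cr ratio point toward a low-preload state such as hemorrhage. This is precisely the broadening of the differential that the mechanism map is intended to produce.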

Although critics contend that these maps take time and do not have a place in the ICU, we find that quickly sketching a mechanism map on rounds while the case is being presented takes only 1–2 minutes and is a powerful way of making one's method of clinical reasoning explicit to the learner. This can also be done later as a way to review pathophysiology. We hold monthly concept mapping sessions for our students (34) to improve their clinical reasoning skills, but find that in the ICU with residents, doing this quickly in real time with a mechanism map is more effective.

Strategy 4: Use Questions to Stimulate Critical Thinking

Questions can be used to engage learners and inspire them to think critically. When questioning trainees, it is important to avoid "quiz show" questions that merely test whether a trainee can recall a fact (e.g., "What is the most common cause of X?"). In our current technological age, answers to this type of question reveal less about thinking ability than about how adept one is at searching the internet. These questions do not provide insight into the trainee's understanding but can, we fear, subtly emphasize that the practice of medicine is about memorization rather than thinking. In addition, this type of question is often perceived by the trainee as "pimping." This can belittle the trainee while securing the attending physician's place of power (35) and create a hostile learning environment.

Attending: Why do you think this patient is hypotensive?

Attending: How does the BUN/creatinine ratio relate to the hypotension?

Attending: How would you expect the intravascular volume depletion to affect his hematocrit?

Questions like these allow the trainee to elaborate on her knowledge, which feels much safer to the learner and provides the attending insight into her thinking.

Resident: If my theory of sepsis were correct, I would think the patient would be intravascularly dry and have a higher hematocrit. The fact that it is only 35% and that his BUN/creatinine ratio is consistent with a prerenal picture is making me worried that maybe the hypotension is not from sepsis but, rather, from bleeding. I think we need to evaluate for gastrointestinal bleeding.

When the right questions are used to coach the resident, her thought processes are uncovered and she can be guided to the correct diagnosis. Although experience and domain-specific knowledge are important, data indicate that in the majority of malpractice cases involving diagnostic error, the problem is not that the doctor did not know the diagnosis; rather, she did not think of it. Reasoning, rather than knowledge, is key to avoiding mistakes in cases with confounding data.

Strategy 5: Assess Your Learner’s Critical Thinking

It is difficult, but necessary for trainee development, to assess critical thinking (18). Milestones, ranging from challenged and unreflective thinkers to accomplished critical thinkers, have been proposed (18). This approach is helpful not only for providing feedback to trainees on their critical thinking but also for giving trainees a framework to guide reflection on how they are thinking (see Table 2 for a description of the milestones).

Milestones of critical thinking and the descriptions of each stage

Note that “Challenged thinker” is in italics because any thinker can be challenged as a result of environmental pressures or time constraints. Adapted from Papp ( 18 ).

It is important to note that anyone, even accomplished critical thinkers, can become “challenged critical thinkers” when the environment precludes critical thinking. This is particularly relevant in critical care. In a busy ICU, one is often faced with time pressure, which contributes to premature closure. In our case presented earlier, perhaps the resident had limited time to admit this patient, and thus settled on the diagnosis of sepsis. It is our hope that teaching trainees to recognize this risk will lead to fewer cognitive biases. Imagine a different exchange between faculty and resident:

Attending: How are you doing with the new admission? How are you thinking about the case?

Resident: I'm concerned this is sepsis, but there are a few pieces that don't fit. However, given the two other admissions and the cardiac arrest on the floor who is heading our way, I haven't been able to give this case as much thought as I would like to.

Attending: Okay, do you want to work through the case together? Or could I help with some other tasks so you have more time to think about this?

This type of response reflects a practicing critical thinker: one who is aware of her limitations and thinking processes. This can only occur, however, if the attending creates an environment in which critical thinking is valued by making a safe space and asking the right questions.

Conclusions

The ICU is a high-acuity, fast-paced, and high-stakes environment in which critical thinking is imperative. Despite the limited empirical evidence to guide faculty on best teaching practices for enhancing reasoning skills, it is our hope that these strategies will provide practical approaches for teaching this topic in the ICU. Given how fast medical knowledge grows and how rapidly technology allows us to find factual information, it is important to teach enduring principles, such as how to think.

Our job in the ICU, where literal life-and-death decisions are made daily, is to teach trainees to focus on how we actually think about problems and to uncover cognitive biases that cause flawed thinking and may lead to diagnostic error. The focus of the preclerkship curriculum at the undergraduate level is increasingly moving away from transfer of content to application of knowledge (36). When teaching residents and fellows, faculty should also emphasize thinking skills by making the thinking process explicit, discussing cognitive biases and debiasing strategies, modeling and teaching inductive reasoning, using questions to stimulate critical thinking, and assessing critical thinking skills.

As Albert Einstein said, “Education... is not the learning of facts, but the training of the mind to think...” ( 38 ).

Supplementary Material

Author Contributions : M.M.H. contributed to manuscript drafting, figure creation, and editing; S.C. contributed to figure creation, critical review, and editing; and R.M.S. contributed to figure creation, critical review, and editing.

Author disclosures are available with the text of this article at www.atsjournals.org .

1. Scriven M, Paul R. Critical thinking as defined by the National Council for Excellence in Critical Thinking. Presented at the 8th Annual International Conference on Critical Thinking and Education Reform; August 1987; Rohnert Park, California.
2. Huang GC, Newman LR, Schwartzstein RM. Critical thinking in health professions education: summary and consensus statements of the Millennium Conference 2011. Teach Learn Med. 2014;26:95–102. doi: 10.1080/10401334.2013.857335.
3. Croskerry P. From mindless to mindful practice: cognitive bias and clinical decision making. N Engl J Med. 2013;368:2445–2448. doi: 10.1056/NEJMp1303712.
4. Facione P. Critical thinking: what it is and why it counts. Millbrae, CA: The California Academic Press; 2011.
5. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185:1124–1131. doi: 10.1126/science.185.4157.1124.
6. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780. doi: 10.1097/00001888-200308000-00003.
7. Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, Newman-Toker DE. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22:672–680. doi: 10.1136/bmjqs-2012-001550.
8. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington, DC: The National Academies Press; 2015.
9. Lambe KA, O'Reilly G, Kelly BD, Curristan S. Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review. BMJ Qual Saf. 2016;25:808–820. doi: 10.1136/bmjqs-2015-004417.
10. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493–1499. doi: 10.1001/archinte.165.13.1493.
11. Hoffman J, editor. 2014 annual benchmarking report: malpractice risks in the diagnostic process. Cambridge, MA: CRICO Strategies; 2014 [accessed 2017 Feb 22]. Available from: https://psnet.ahrq.gov/resources/resource/28612/2014-annual-benchmarking-report-malpractice-risks-in-the-diagnostic-process
12. Willingham DT. Critical thinking: why is it so hard to teach? Am Educ. 2007;Summer:8–19.
13. Wellbery C. Flaws in clinical reasoning: a common cause of diagnostic error. Am Fam Physician. 2011;84:1042–1048.
14. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Acad Med. 2017;92:23–30. doi: 10.1097/ACM.0000000000001421.
15. Croskerry P. Diagnostic failure: a cognitive and affective approach. In: Henriksen K, Battles JB, Marks ES, Lewin DI, editors. Advances in patient safety: from research to implementation. Vol. 2: concepts and methodology. Rockville, MD: Agency for Healthcare Research and Quality; 2005. pp. 241–254.
16. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41:110–120. doi: 10.1067/mem.2003.22.
17. Croskerry P, Petrie DA, Reilly JB, Tait G. Deciding about fast and slow decisions. Acad Med. 2014;89:197–200. doi: 10.1097/ACM.0000000000000121.
18. Papp KK, Huang GC, Lauzon Clabo LM, Delva D, Fischer M, Konopasek L, Schwartzstein RM, Gusic M. Milestones of critical thinking: a developmental model for medicine and nursing. Acad Med. 2014;89:715–720. doi: 10.1097/ACM.0000000000000220.
19. Garrouste Orgeas M, Timsit JF, Soufir L, Tafflet M, Adrie C, Philippart F, Zahar JR, Clec'h C, Goldran-Toledano D, Jamali S, et al.; Outcomerea Study Group. Impact of adverse events on outcomes in intensive care unit patients. Crit Care Med. 2008;36:2041–2047. doi: 10.1097/CCM.0b013e31817b879c.
20. Anderson LW, Krathwohl DR. A taxonomy for learning, teaching, and assessing: a revision of Bloom's taxonomy of educational objectives. New York: Longman; 2001.
21. Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2011.
22. Evans JS, Stanovich KE. Dual-process theories of higher cognition: advancing the debate. Perspect Psychol Sci. 2013;8:223–241. doi: 10.1177/1745691612460685.
23. Ark TK, Brooks LR, Eva KW. Giving learners the best of both worlds: do clinical teachers need to guard against teaching pattern recognition to novices? Acad Med. 2006;81:405–409. doi: 10.1097/00001888-200604000-00017.
24. Haselton MG, Nettle D, Andrews PW. The evolution of cognitive bias. In: Buss DM, editor. The handbook of evolutionary psychology. Hoboken, NJ: John Wiley & Sons; 2005. pp. 724–746.
25. Elstein AS. Thinking about diagnostic thinking: a 30-year perspective. Adv Health Sci Educ Theory Pract. 2009;14:7–18. doi: 10.1007/s10459-009-9184-0.
26. Reason J. Human error. Cambridge: Cambridge University Press; 1990.
27. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9:1184–1204. doi: 10.1111/j.1553-2712.2002.tb01574.x.
28. Arkes HA. Impediments to accurate clinical judgment and possible ways to minimize their impact. In: Arkes HR, Hammond KR, editors. Judgment and decision making: an interdisciplinary reader. New York: Cambridge University Press; 1986. pp. 582–592.
29. Simpkin AL, Schwartzstein RM. Tolerating uncertainty: the next medical revolution? N Engl J Med. 2016;375:1713–1715. doi: 10.1056/NEJMp1606402.
30. Croskerry P. The feedback sanction. Acad Emerg Med. 2000;7:1232–1238. doi: 10.1111/j.1553-2712.2000.tb00468.x.
31. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355:2217–2225. doi: 10.1056/NEJMra054782.
32. Pottier P, Hardouin J-B, Hodges BD, Pistorius MA, Connault J, Durant C, Clairand R, Sebille V, Barrier JH, Planchon B. Exploring how students think: a new method combining think-aloud and concept mapping protocols. Med Educ. 2010;44:926–935. doi: 10.1111/j.1365-2923.2010.03748.x.
33. Guerrero APS. Mechanistic case diagramming: a tool for problem-based learning. Acad Med. 2001;76:385–389. doi: 10.1097/00001888-200104000-00020.
34. Richards J, Schwartzstein R, Irish J, Almeida J, Roberts D. Clinical physiology grand rounds. Clin Teach. 2013;10:88–93. doi: 10.1111/j.1743-498X.2012.00614.x.
  • 35. Kost A, Chen FM. Socrates was not a pimp: changing the paradigm of questioning in medical education. Acad Med. 2015;90:20–24. doi: 10.1097/ACM.0000000000000446. [ DOI ] [ PubMed ] [ Google Scholar ]
  • 36. Krupat E, Richards JB, Sullivan AM, Fleenor TJ, Jr, Schwartzstein RM. Assessing the effectiveness of case-based collaborative learning via randomized controlled trial. Acad Med. 2016;91:723–729. doi: 10.1097/ACM.0000000000001004. [ DOI ] [ PubMed ] [ Google Scholar ]
  • 37. Hogarth RM. Judgement and choice: the psychology of decision. Chichester: Wiley; 1980. [ Google Scholar ]
  • 38. Frank P. Einstein: his life and times. New York: Da Capo Press; 2002.



British Journal of Nursing


Facione PA. Critical thinking: what it is and why it counts. 2020. https://tinyurl.com/ybz73bnx (accessed 27 April 2021)

Faculty of Intensive Care Medicine. Curriculum for training for advanced critical care practitioners: syllabus (part III). Version 1.1. 2018. https://www.ficm.ac.uk/accps/curriculum (accessed 27 April 2021)

Guerrero AP. Mechanistic case diagramming: a tool for problem-based learning. Acad Med. 2001; 76:(4)385-9 https://doi.org/10.1097/00001888-200104000-00020

Harasym PH, Tsai TC, Hemmati P. Current trends in developing medical students' critical thinking abilities. Kaohsiung J Med Sci. 2008; 24:(7)341-55 https://doi.org/10.1016/S1607-551X(08)70131-1

Hayes MM, Chatterjee S, Schwartzstein RM. Critical thinking in critical care: five strategies to improve teaching and learning in the intensive care unit. Ann Am Thorac Soc. 2017; 14:(4)569-575 https://doi.org/10.1513/AnnalsATS.201612-1009AS

Health Education England. Multi-professional framework for advanced clinical practice in England. 2017. https://www.hee.nhs.uk/sites/default/files/documents/multi-professionalframeworkforadvancedclinicalpracticeinengland.pdf (accessed 27 April 2021)

Health Education England, NHS England/NHS Improvement, Skills for Health. Core capabilities framework for advanced clinical practice (nurses) working in general practice/primary care in England. 2020. https://www.skillsforhealth.org.uk/images/services/cstf/ACP%20Primary%20Care%20Nurse%20Fwk%202020.pdf (accessed 27 April 2021)

Health Education England. Advanced practice mental health curriculum and capabilities framework. 2020. https://www.hee.nhs.uk/sites/default/files/documents/AP-MH%20Curriculum%20and%20Capabilities%20Framework%201.2.pdf (accessed 27 April 2021)

Jacob E, Duffield C, Jacob D. A protocol for the development of a critical thinking assessment tool for nurses using a Delphi technique. J Adv Nurs. 2017; 73:(8)1982-1988 https://doi.org/10.1111/jan.13306

Kohn MA. Understanding evidence-based diagnosis. Diagnosis (Berl). 2014; 1:(1)39-42 https://doi.org/10.1515/dx-2013-0003

Linn A, Khaw C, Kildea H, Tonkin A. Clinical reasoning—a guide to improving teaching and practice. 2012. https://www.racgp.org.au/afp/201201/45593

McGee S. Evidence-based physical diagnosis, 4th edn. Philadelphia PA: Elsevier; 2018

Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Acad Med. 2017; 92:(1)23-30 https://doi.org/10.1097/ACM.0000000000001421

Papp KK, Huang GC, Lauzon Clabo LM et al. Milestones of critical thinking: a developmental model for medicine and nursing. Acad Med. 2014; 89:(5)715-20 https://doi.org/10.1097/acm.0000000000000220

Rencic J, Lambert WT, Schuwirth L, Durning SJ. Clinical reasoning performance assessment: using situated cognition theory as a conceptual framework. Diagnosis (Berl). 2020; 7:(3)177-179 https://doi.org/10.1515/dx-2019-0051

Ross D et al. Examining critical thinking skills in family medicine residents. 2016. https://www.stfm.org/FamilyMedicine/Vol48Issue2/Ross121

Royal College of Emergency Medicine. Emergency care advanced clinical practitioner—curriculum and assessment, adult and paediatric. Version 2.0. 2019. https://tinyurl.com/eps3p37r (accessed 27 April 2021)

Young ME, Thomas A, Lubarsky S et al. Mapping clinical reasoning literature across the health professions: a scoping review. BMC Med Educ. 2020; 20 https://doi.org/10.1186/s12909-020-02012-9

Advanced practice: critical thinking and clinical reasoning

Sadie Diamond-Fox

Senior Lecturer in Advanced Critical Care Practice, Northumbria University, Advanced Critical Care Practitioner, Newcastle upon Tyne Hospitals NHS Foundation Trust, and Co-Lead, Advanced Critical/Clinical Care Practitioners Academic Network (ACCPAN)


Advanced Critical Care Practitioner, South Tees Hospitals NHS Foundation Trust



Clinical reasoning is a multi-faceted and complex construct, the understanding of which has emerged from multiple fields outside of healthcare literature, primarily the psychological and behavioural sciences. The application of clinical reasoning is central to the advanced non-medical practitioner (ANMP) role, as complex patient caseloads with undifferentiated and undiagnosed diseases are now a regular feature in healthcare practice. This article explores some of the key concepts and terminology that have evolved over the last four decades and have led to our modern day understanding of this topic. It also considers how clinical reasoning is vital for improving evidence-based diagnosis and subsequent effective care planning. A comprehensive guide to applying diagnostic reasoning on a body systems basis will be explored later in this series.

The Multi-professional Framework for Advanced Clinical Practice highlights clinical reasoning as one of the core clinical capabilities for advanced clinical practice in England ( Health Education England (HEE), 2017 ). This is also identified in other specialist core capability frameworks and training syllabuses for advanced clinical practitioner (ACP) roles ( Faculty of Intensive Care Medicine, 2018 ; Royal College of Emergency Medicine, 2019 ; HEE, 2020 ; HEE et al, 2020 ).

Rencic et al (2020) defined clinical reasoning as ‘a complex ability, requiring both declarative and procedural knowledge, such as physical examination and communication skills’. A plethora of literature exists on this topic, with a recent scoping review identifying 625 papers, spanning 47 years, across the health professions ( Young et al, 2020 ). A diverse range of terms is used to refer to clinical reasoning within the healthcare literature ( Table 1 ), which can make defining these terms, and assessing their influence on clinical practice and education, somewhat challenging.

The concept of clinical reasoning has changed dramatically over the past four decades. What was once thought to be a process-dependent task is now considered to present a more dynamic state of practice, which is affected by ‘complex, non-linear interactions between the clinician, patient, and the environment’ ( Rencic et al, 2020 ).

Cognitive and meta-cognitive processes

As detailed in the table, multiple themes surrounding the cognitive and meta-cognitive processes that underpin clinical reasoning have been identified. Central to these processes is the practice of critical thinking. As with clinical reasoning itself, definitions and conceptualisations of critical thinking in the healthcare setting vary. Facione (2020) described critical thinking as ‘purposeful reflective judgement’ that consists of six discrete cognitive skills: analysis, inference, interpretation, explanation, synthesis and self-regulation. Ross et al (2016) identified that critical thinking positively correlates with academic success, professionalism, clinical decision-making, and wider reasoning and problem-solving capabilities. Jacob et al (2017) also identified that patient outcomes and safety are directly linked to critical thinking skills.

Harasym et al (2008) listed nine discrete cognitive steps that may be applied to the process of critical thinking, which integrates both cognitive and meta-cognitive processes:

  • Gather relevant information
  • Formulate clearly defined questions and problems
  • Evaluate relevant information
  • Utilise and interpret abstract ideas effectively
  • Infer well-reasoned conclusions and solutions
  • Pilot outcomes against relevant criteria and standards
  • Use alternative thought processes if needed
  • Consider all assumptions, implications, and practical consequences
  • Communicate effectively with others to solve complex problems.

There are a number of widely used strategies to develop critical thinking and evidence-based diagnosis. These include simulated problem-based learning platforms, high-fidelity simulation scenarios, case-based discussion forums, reflective journals as part of continuing professional development (CPD) portfolios and journal clubs.

Dual process theory and cognitive bias in diagnostic reasoning

A lack of understanding of the interrelationship between critical thinking and clinical reasoning can result in cognitive bias, which can in turn lead to diagnostic errors ( Hayes et al, 2017 ). Embedded within our understanding of how diagnostic errors occur is dual process theory—system 1 and system 2 thinking. The characteristics of these are described in Table 2 . Although much of the literature in this area regards dual process theory as a valid representation of clinical reasoning, the exact causes of diagnostic errors remain unclear and require further research ( Norman et al, 2017 ). The most effective way in which to teach critical thinking skills in healthcare education also remains unclear; however, Hayes et al (2017) proposed five strategies, based on well-known educational theory and principles, that they have found to be effective for teaching and learning critical thinking within the ‘high-octane’ and ‘high-stakes’ environment of the intensive care unit ( Table 3 ). This is arguably a setting that does not always present an ideal environment for learning given its fast pace and constant sensory stimulation. However, it may be argued that if a model has proven to be effective in this setting, it could be extrapolated to other busy clinical environments and may even provide a useful aide memoire for self-assessment and reflective practices.

Integrating the clinical reasoning process into the clinical consultation

Linn et al (2012) described the clinical consultation as ‘the practical embodiment of the clinical reasoning process by which data are gathered, considered, challenged and integrated to form a diagnosis that can lead to appropriate management’. The application of the previously mentioned psychological and behavioural science theories is intertwined throughout the clinical consultation via the following discrete processes:

  • The clinical history generates an initial diagnostic hypothesis, which is then tested through skilled and specific questioning
  • The clinician formulates a primary diagnosis and differential diagnoses in order of likelihood
  • Physical examination is carried out, aimed at gathering further data necessary to confirm or refute the hypotheses
  • A selection of appropriate investigations, using an evidence-based approach, may be ordered to gather additional data
  • The clinician (in partnership with the patient) then implements a targeted and rationalised management plan, based on best-available clinical evidence.

Linn et al (2012) also provided a useful framework for applying the above methods when teaching consultation skills with a focus on clinical reasoning (see Table 4 ). This framework may also prove useful to those new to undertaking the clinical consultation.

Evidence-based diagnosis and diagnostic accuracy

The principles of clinical reasoning are embedded within the practice of formulating an evidence-based diagnosis (EBD). According to Kohn (2014), EBD quantifies the probability of the presence of a disease through the use of diagnostic tests. He described three pertinent questions to consider in this respect:

  • ‘How likely is the patient to have a particular disease?’
  • ‘How good is this test for the disease in question?’
  • ‘Is the test worth performing to guide treatment?’

EBD gives a statistical discriminatory weighting to update the probability of a disease to either support or refute the working and differential diagnoses, which can then determine the appropriate course of further diagnostic testing and treatments.
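The updating step described above can be made concrete with a short worked example. This is an illustrative sketch only (the function names and the numbers are invented, not from the article): a pretest probability is converted to odds, multiplied by a likelihood ratio, and converted back to a posttest probability.

```python
def prob_to_odds(p):
    """Convert a probability (between 0 and 1) to odds."""
    return p / (1.0 - p)

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1.0 + odds)

def posttest_probability(pretest_prob, likelihood_ratio):
    """Update the probability of disease given a test's likelihood ratio
    (posttest odds = pretest odds x LR)."""
    return odds_to_prob(prob_to_odds(pretest_prob) * likelihood_ratio)

# Hypothetical example: a pretest probability of 25% combined with a
# positive result on a test with LR+ = 4.5
print(round(posttest_probability(0.25, 4.5), 3))  # 0.6
```

In this invented example the positive result raises the probability of disease from 25% to 60%, illustrating how a strongly positive likelihood ratio shifts the working diagnosis.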

Diagnostic accuracy refers to how positive or negative findings change the probability of the presence of disease. In order to understand diagnostic accuracy, we must begin to understand the underlying principles and related statistical calculations concerning sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and likelihood ratios.

The construction of a two-by-two (2 × 2) table ( Figure 1 ) allows the calculation of several statistical weightings for a pertinent point in the history, a finding/sign on physical examination, or a test result. From this construct we can then determine the aforementioned statistical calculations as follows ( McGee, 2018 ):

  • Sensitivity, the proportion of patients with the diagnosis who have the physical sign or a positive test result = A ÷ (A + C)
  • Specificity, the proportion of patients without the diagnosis who lack the physical sign or have a negative test result = D ÷ (B + D)
  • Positive predictive value, the proportion of patients with the physical sign or a positive test result who have the disease = A ÷ (A + B)
  • Negative predictive value, the proportion of patients lacking the physical sign or with a negative test result who do not have the disease = D ÷ (C + D)
  • Likelihood ratio (LR), a finding's, sign's or test's sensitivity divided by its false-positive rate, where N1 = A + C (all patients with disease) and N2 = B + D (all patients without disease). A test of no value has an LR of 1 and has no impact on the patient's odds of disease
  • Positive likelihood ratio, the proportion of patients with disease who have a positive finding/sign/test divided by the proportion of patients without disease who have a positive finding/sign/test = (A ÷ N1) ÷ (B ÷ N2), or sensitivity ÷ (1 – specificity). The further an LR is above 1, the more the finding/sign/test result raises a patient's probability of disease. Thresholds of ≥4 are often considered significant when focusing a clinician's interest on the most pertinent positive findings, clinical signs or tests
  • Negative likelihood ratio, the proportion of patients with disease who have a negative finding/sign/test result divided by the proportion of patients without disease who have a negative finding/sign/test result = (C ÷ N1) ÷ (D ÷ N2), or (1 – sensitivity) ÷ specificity. The closer an LR is to 0, the more the finding/sign/test result lowers a patient's probability of disease. Thresholds of <0.4 are often considered significant when focusing a clinician's interest on the most pertinent negative findings, clinical signs or tests.
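As a worked illustration of these formulas, the following sketch computes each metric from a hypothetical 2 × 2 table. The counts a, b, c and d are invented for the example (a = disease present/test positive, b = disease absent/test positive, c = disease present/test negative, d = disease absent/test negative):

```python
# Hypothetical 2 x 2 table counts (invented for illustration)
a, b, c, d = 90, 20, 10, 80  # a: true positives, b: false positives,
                             # c: false negatives, d: true negatives

sensitivity = a / (a + c)                      # 90 / 100 = 0.9
specificity = d / (b + d)                      # 80 / 100 = 0.8
ppv = a / (a + b)                              # 90 / 110, about 0.82
npv = d / (c + d)                              # 80 / 90, about 0.89
lr_positive = sensitivity / (1 - specificity)  # 0.9 / 0.2 = 4.5
lr_negative = (1 - sensitivity) / specificity  # 0.1 / 0.8 = 0.125
```

With these invented counts the positive likelihood ratio is 4.5 (above the ≥4 threshold mentioned above) and the negative likelihood ratio is 0.125 (well below 0.4), so both a positive and a negative result would meaningfully shift the probability of disease.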


There are various online statistical calculators that can aid in the above calculations, such as the BMJ Best Practice statistical calculators, which may be used as a guide (https://bestpractice.bmj.com/info/toolkit/ebm-toolbox/statistics-calculators/).

Clinical scoring systems

Evidence-based literature supports the practice of determining clinical pretest probability of certain diseases prior to proceeding with a diagnostic test. There are numerous validated pretest clinical scoring systems and clinical prediction tools that can be used in this context and accessed via various online platforms such as MDCalc (https://www.mdcalc.com/#all). Such clinical prediction tools include:

  • 4Ts score for heparin-induced thrombocytopenia
  • ABCD² score for transient ischaemic attack (TIA)
  • CHADS₂ score for atrial fibrillation stroke risk
  • Aortic Dissection Detection Risk Score (ADD-RS).
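To illustrate how a pretest scoring system of this kind turns clinical findings into a single number, here is a sketch of the ABCD² score for TIA, using its published criteria (age ≥60 = 1; BP ≥140/90 mmHg = 1; unilateral weakness = 2, or speech disturbance without weakness = 1; duration ≥60 minutes = 2, or 10–59 minutes = 1; diabetes = 1). The function name and the example patient are invented for illustration; a validated calculator should always be used in practice.

```python
def abcd2_score(age, systolic, diastolic, unilateral_weakness,
                speech_disturbance, duration_min, diabetes):
    """Illustrative sketch of the ABCD2 pretest score for TIA (range 0-7)."""
    score = 0
    if age >= 60:                           # A: age
        score += 1
    if systolic >= 140 or diastolic >= 90:  # B: blood pressure
        score += 1
    if unilateral_weakness:                 # C: clinical features
        score += 2
    elif speech_disturbance:
        score += 1
    if duration_min >= 60:                  # D: duration of symptoms
        score += 2
    elif duration_min >= 10:
        score += 1
    if diabetes:                            # D: diabetes
        score += 1
    return score

# Hypothetical patient: 72 years old, BP 150/85 mmHg, unilateral weakness,
# symptoms lasting 30 minutes, known diabetic
print(abcd2_score(72, 150, 85, True, False, 30, True))  # 6
```

A hypothetical patient scoring 6 would fall at the high-risk end of the scale, prompting the clinician to prioritise urgent investigation rather than relying on clinical impression alone.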

Conclusions

Critical thinking and clinical reasoning are fundamental skills of the advanced non-medical practitioner (ANMP) role. They are complex processes and require an array of underpinning knowledge of not only the clinical sciences, but also psychological and behavioural science theories. There are multiple constructs to guide these processes, not all of which will be suitable for the vast array of specialist areas in which ANMPs practice. There are multiple opportunities throughout the clinical consultation process in which ANMPs can employ the principles of critical thinking and clinical reasoning in order to improve patient outcomes. There are also multiple online toolkits that may be used to guide the ANMP in this complex process.

  • Much like consultation and clinical assessment, the application of clinical reasoning was once seen as solely the duty of a doctor; however, the advanced non-medical practitioner (ANMP) role crosses those traditional boundaries
  • Critical thinking and clinical reasoning are fundamental skills of the ANMP role
  • The processes underlying clinical reasoning are complex and require an array of underpinning knowledge of not only the clinical sciences, but also psychological and behavioural science theories
  • Through the use of the principles underlying critical thinking and clinical reasoning, there is potential to make a significant contribution to diagnostic accuracy, treatment options and overall patient outcomes

CPD reflective questions

  • What assessment instruments exist for the measurement of cognitive bias?
  • Think of an example of when cognitive bias may have impacted on your own clinical reasoning and decision making
  • What resources exist to aid you in developing into the ‘advanced critical thinker’?
  • What resources exist to aid you in understanding the statistical terminology surrounding evidence-based diagnosis?


Principles of diagnostic reasoning

[Illustration: a man showing his pharmacist that his head hurts. Wes Mountain/The Pharmaceutical Journal]

By the end of this article, you will be able to:

  • Understand the role of fast and slow thinking during diagnostic reasoning;
  • Know how to structure your diagnostic approach;
  • Use illness scripts as a tool for developing your diagnostic efficiency.

RPS Competency Framework for All Prescribers

This article aims to support the development of knowledge and skills related to the following competencies:

Domain 1: Assess the patient (1.6, 1.7, 1.8, 1.10, 1.11, 1.12)

  • Takes and documents an appropriate medical, psychological and medication history, including allergies and intolerances;
  • Undertakes and documents an appropriate clinical assessment;
  • Identifies and addresses potential vulnerabilities that may be causing the patient/carer to seek treatment;
  • Requests and interprets relevant investigations necessary to inform treatment options;
  • Makes, confirms or understands, and documents the working or final diagnosis by systematically considering the various possibilities (differential diagnosis);
  • Understands the condition(s) being treated, their natural progression, and how to assess their severity, deterioration and anticipated response to treatment.

Introduction

Diagnostic reasoning is a concept often used interchangeably with terms such as ‘clinical reasoning’, ‘clinical problem solving’ and ‘clinical decision making’. Collectively, it is recognised that these terms represent the central idea proposed by Barrows and Tamblyn of describing the “cognitive process necessary to evaluate and manage a patient’s medical problem” [1,2]. A broader definition encompasses the idea that diagnostic reasoning is the conscious and unconscious interpretation of patient data, and the consideration of the risks and benefits of actions, to determine a working diagnosis and treatment plan [3].

Historically, diagnostic reasoning has focused on applying the information gained from history taking and physical examination to formulate a list of potential causes for the patient’s presenting complaint and create a differential diagnosis [4]. Given the changing role of pharmacists as independent prescribers, it can also involve identifying medication-related problems, considering therapeutic options and optimising medication regimens [5,6].

It is recommended that you read this article in conjunction with these resources from The Pharmaceutical Journal:

  • ‘Introduction to the prescribing consultation’;
  • ‘Principles of effective history taking when prescribing’;
  • ‘Introduction to clinical assessment for prescribers’;
  • ‘Performing a physical examination on a patient when prescribing’.

To help further expand your prescribing skills, additional related articles are linked throughout. You will also be able to test your knowledge by completing a short quiz at the end of the article.

Structuring diagnostic reasoning

A commonly accepted theory of diagnostic reasoning was proposed by Kahneman, who described the two different thought processes that occur when making a decision; this dual processing involves system 1 and system 2 thinking. It is acknowledged that clinicians interchange and combine these diagnostic reasoning methods depending on the scenario and their own experience [7].

System 1 and system 2 thinking

System 1 — intuitive system: the fast, automatic reaction to information, based on mental shortcuts formed from patterns or habits. This is often triggered when dealing with common, typical and uncomplicated presentations.

System 2 — hypothetico-deductive system: the slow, systematic, controlled process based on conscious judgement, logic and the range of probabilities being considered. This is often triggered if the presentation is not recognised, atypical or ambiguous.


Effective diagnostic reasoning often utilises both system 1 and system 2 thinking and requires a combination of experience and skills (pattern recognition, critical thinking, communication skills, evidence-based practice, teamwork and reflection) [8]. The reasoning process can be considered as comprising four discrete stages: information gathering, hypothesis generation, hypothesis testing and reflection [9]. We will now briefly consider each.

1. Information gathering         

The first step of diagnostic reasoning is processing the information that can be gained from the patient’s health record, history, laboratory results, tests, and  physical examination . For example, recent medication changes, blood pressure readings, creatinine clearance trends and physical features following examination (e.g. ankle swelling, shortness of breath). 

2. Hypothesis generation

From here it becomes possible to generate a broad, if not exhaustive, list of possible conditions [10]. When there are many signs and symptoms to consider, or a vague presentation, it can be challenging to match the information gathered with a single problem or even a problem shortlist. At this stage, a system 2 approach can be adopted in which clusters of observations are separated and explored systematically within themes (e.g. anatomical location, the patient’s age, timing or onset of symptoms, review of body systems or multisystem conditions). Some examples of the benefits of this approach are shown in the Table.

3. Hypothesis testing

By now, the problem list has been reviewed, rationalised and prioritised to generate the working hypothesis (or differential/working diagnosis), through a focused matching of the symptoms to possible diseases or medication-related issues, and cross-matching the associated symptoms back against the patient’s presentation and medication history.

This step promotes the identification of defining features of a condition or, where they exist, discriminatory features that are unique to a particular disease [10]. This process allows the elimination of conditions that do not match and rationalises your list of differentials into a more specific list of possibilities [4]. For example, shortness of breath and coughing without signs of infection may rule out community-acquired pneumonia; coughing up pink sputum may point towards pulmonary oedema.

Finally, further investigations to either provide pertinent negatives or confirmatory findings should be considered. For example, a patient may present with symptoms of heart failure following a recent myocardial infarction (e.g. shortness of breath, oedema) and has a raised B-type natriuretic peptide (BNP), but would require an echocardiogram to confirm diagnosis.

This working diagnosis then informs the treatment plan.

Throughout this whole process it is essential that you are vigilant to potential life-limiting conditions, including how to identify or rule out red flags and when onward referral will be required ​[4,11]​ . At all times it is imperative that you work within your scope of practice and have the requisite self-awareness to know when to refer to a more senior clinician, or involve another healthcare professional in your decision-making process.

4. Reflection

For all prescribers, but particularly those early in their career or working in a new scope of practice, an organised approach is critical to avoiding cognitive errors when generating the diagnostic hypothesis. Experience using system 1 thinking can allow pattern recognition to generate a diagnosis based on conditions, patients and case reviews seen before, and there is a risk that the prescriber may apply biases from their own experience rather than the full depth of information that has been presented. The application of a system 2 approach offers a more considered and accurate diagnosis [11].

Having formulated a working diagnosis, it may sometimes be appropriate to perform further tests to confirm the diagnosis. In 2014, Kohn identified three pertinent questions for diagnostic reasoning to consider when deciding whether further tests are required [12]:

  • How likely is the patient to have this disease?
  • How good is this ‘test’ for the disease in question?
  • Is the test worth performing to guide treatment?

Finally, the identified problem and the reasoning to support this should be succinctly documented in the patient record ​[12]​ .

The illness script method 

An alternative understanding and application of diagnostic reasoning is the use of ‘illness scripts’. This is where a clinician relies upon prior learning and first-hand experience to recognise a pattern of clinical characteristics as clues to the potential diagnosis. An illness script is a mental cue card that represents an individual disease, including its typical causal features, the underlying pathology, the resulting signs and symptoms, the expected diagnostic findings and the most likely course/prognosis with suitable management.

Illness scripts can help prescribers focus their questions during history taking, contextualise patterns of signs and symptoms and help to integrate new clinical knowledge with existing knowledge. A novice prescribing clinician will have limited experience to draw from initially so will rely upon their biomedical knowledge and have a broad range of differential options where symptoms may need to be worked through one by one. Experienced practitioners will have had time to hone their scripts so that they are quicker and can support a diagnosis more directly. Regardless of experience, the illness script requires knowledge of the epidemiological factors of the disease, the associated signs and symptoms and the pathophysiology. Some models of illness scripts also consider the time-course of the disease ​[10,13,14]​ .

We can use an illness script approach for many conditions, and you may already have some well-developed scripts for conditions you commonly see in your own clinical practice. For example, an illness script for croup could involve the following: epidemiology (infants and toddlers), pathophysiology (parainfluenza virus), presentation (fever, barky cough, stridor, worse at night) and management (hydration, paracetamol).
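The components of such a script can be thought of as a structured record. The sketch below (the names and structure are invented for illustration, not a published model) encodes the croup script from the example and naively counts how many of a patient's findings match its typical presentation:

```python
# A hypothetical illness-script record for croup, following the
# epidemiology / pathophysiology / presentation / management structure above
croup_script = {
    "epidemiology": {"infants", "toddlers"},
    "pathophysiology": {"parainfluenza virus"},
    "presentation": {"fever", "barky cough", "stridor", "worse at night"},
    "management": {"hydration", "paracetamol"},
}

def presentation_matches(script, findings):
    """Naively count how many presenting findings appear in the script."""
    return len(script["presentation"] & set(findings))

# Two of these three findings match croup's typical presentation
print(presentation_matches(croup_script, ["fever", "stridor", "vomiting"]))  # 2
```

Real clinical reasoning weighs defining and discriminating features rather than simply counting matches, but the record structure illustrates how a script organises epidemiology, mechanism, presentation and management in one retrievable unit.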

The following case illustrates how an illness script can be developed and used to form a diagnosis.

Case in practice

[Case illustration: an older man]

Action: List the questions that you would ask the patient. What physical examination could/would you undertake? What illness scripts did you form as you read through the case presentation? How can this inform the diagnosis?

Discussion: Several features of the presentation indicate that the patient has gout, rather than a chronic condition such as arthritis, including:

  • Presentation as an acute flare of symptoms;
  • Onset of symptoms were abrupt;
  • Pain is localised and severe;
  • It is affecting a single joint.

The effectiveness of your script will have been influenced by how much you already knew about gout. Awareness of the epidemiology of the condition, the temporal pattern of symptom presentation and predisposing conditions, and specifically any defining features (single joint) or discriminating symptoms (episodic acute pain), will have helped you to reach a diagnosis directly. Novice practitioners whose illness script for gout is still forming may need to spend longer working through the information gathered before they can differentiate the patient's disease from others that may have a similar presentation.

The script will have included ideas for eliminating other diagnostic possibilities and for weighing up the likelihood of the remaining candidates. For example, you would have asked questions about acute trauma or injury to exclude a fracture, and ruled out the unlikely diagnosis of septic arthritis because the patient is not presenting with any systemic signs of infection.

For more information on gout, see: 'Treatment and management of gout: the role of pharmacy'.

While learning to understand the processes of diagnostic reasoning, it is also important to recognise the potential sources of error, which may include [15]:

  • Anchoring bias: the tendency to fix on an idea early in the process and not adjust your thinking when new information becomes available;
  • Availability heuristic: assuming a diagnosis because you have recently seen several similar cases;
  • Confirmation bias: seeking evidence to support your initial impression or diagnosis and ignoring information that does not support it;
  • Diagnostic momentum: relying on previous clinicians' diagnostic decisions and ignoring new information that may contradict them;
  • Framing effect: how the information is presented influences the diagnosis (e.g. emphasising or excluding clinical variables such as CD4 count, abdominal pain, weight loss, anxiety or polysubstance abuse can lead to different diagnostic considerations, ranging from viral gastroenteritis to hyperthyroidism, malnutrition or drug toxicity);
  • Representation error: not considering the prevalence of a condition when predicting its likelihood;
  • Visceral bias: negative or positive feelings towards the patient influencing your decisions and diagnosis.

Tips and guidance for good diagnostic reasoning

There are several things prescribers can do to help guard against these common sources of diagnostic error [16]:

  • Slow down. Time is often pressured, but rushing threatens sufficient information gathering and invites unconscious biases;
  • Consider the likelihood of the presenting problems ("common things are common"). That said, a broad awareness of some rarer diseases, and an openness to considering them, is also required;
  • Consider what information you have and focus on what is truly relevant;
  • Actively consider alternative diagnoses, paying specific attention to potentially life-limiting conditions and their presenting symptoms. For example, it is important to rule out temporal arteritis in patients presenting with headache if scalp tenderness is present;
  • Ask active questions to rule out or disprove conditions that are part of your working diagnosis (the distinguishing features of those diseases). For example, chest pain that occurs only on inspiration, rather than at rest, may point to a respiratory rather than a cardiac cause;
  • Consider the implications of a wrong diagnosis.

Good diagnostic reasoning is founded on the pharmacist undertaking effective continuing professional development and practising evidence-based diagnosis and medicine. Professional experience and exposure to different illnesses will grow naturally over time, but it is important that prescribers continuously invest in developing their knowledge of the conditions being treated, their natural progression, and how to assess their severity, deterioration and anticipated response to treatment.

The following scenario provides an opportunity to explore some of the concepts introduced in this article and apply them to practice.


Your physical examination identified that the pharynx is inflamed, the chest examination is clear and there is no impaired motion of any joints. The laboratory results show a raised CRP, Hb of 116 g/L and a raised white cell count; the patient's temperature is 38.4°C.

Action: From the details provided, write a list of your most likely differential diagnoses and any additional tests that you may request.

Using the cluster-of-observations approach (see Table), how could you reach a diagnosis from the differential list you have created?

Discussion (applying System 2 thinking): Taking the anatomical location approach, you start with the head and explore the history further to identify that the source of pain is in the throat, mostly on swallowing. Your physical observations match this, with a pustular and inflamed appearance of the tonsils. The result of the rapid antigen test is positive.

Considering the age of the patient, the common differentials for the symptoms at this location include viral or bacterial sore throat, with the positive antigen test making bacterial infection most likely. The patient's age also indicates a risk of glandular fever (mononucleosis), and understanding the onset of symptoms can be a useful determining factor here.

Linking the onset of the different signs and symptoms can help to differentiate the story and the likely diagnosis. In this case, because the exhaustion predates the sore throat, it differentiates between the two most common concerns: glandular fever and streptococcal sore throat. In the absence of laboratory test results, this is a key component of the diagnostic reasoning that supports a diagnosis of bacterial sore throat. The exhaustion/lethargy here is unrelated to the sore throat and could be linked to other conditions or lifestyle. Conversely, had the sore throat persisted for weeks with the lethargy starting later, this would be more indicative of glandular fever.

This diagnosis can be corroborated by looking at the involvement of other body systems. The findings in this case are localised, and the more general signs and symptoms of fever and aches can be linked back to your likely differential diagnosis. If other systems were involved, for example with abdominal tenderness, this could influence your diagnostic reasoning and prompt you to explore further the potential diagnosis of glandular fever.

As the presenting patient was young, with no recorded history of smoking or occupational hazards, numerous conditions did not feature in your initial differential list.

Reflections on the case : How would your diagnostic reasoning change if the patient presenting was a 59-year-old male with a 40-pack-year smoking history and recent unplanned weight loss? 

How would your diagnostic reasoning change if the patient presenting was a 36-year-old female refugee living in shared accommodation?

For more information on the assessment of sore throat and the use of the Centor and FeverPAIN prediction tools, see:

  • 'Introduction to clinical assessment for prescribers';
  • 'Case-based learning: sore throat'.

Knowledge check


Question 1

What type of thinking is being used when you are relying upon your fast thinking and pattern recognition to form a diagnosis?

  • System 1 thinking
  • System 2 thinking

Question 2

Which four of the following are considered strategies for good diagnostic reasoning and safety netting? 

  • Working quickly to get to the answer
  • Consider the likelihood of the presenting problems
  • Consider what information you have and focus on what is irrelevant
  • Actively consider alternative diagnoses, including potentially life limiting conditions
  • Ask active questions to rule out or disprove conditions that are part of your working diagnosis
  • Consider the implications of a wrong diagnosis

Question 3

Seeking evidence to support your diagnosis and/or ignoring information that does not support your diagnosis is known as:

  • Confirmation bias
  • Diagnostic momentum
  • Framing effect
  • Representation error

Question 4

True or false — age is not relevant when creating the differential diagnosis list.

Question 5

True or false — diagnostic reasoning only applies to the formulation of a new diagnosis.

Expanding your scope of practice

The following resources expand on the information contained in this article:

  • 'How to use clinical reasoning in pharmacy', The Pharmaceutical Journal;
  • 'Effective practitioner: core skills of decision making', NHS Education Scotland;
  • 'How to apply evidence to practice', The Pharmaceutical Journal.
  • 1 Barrows HS, Tamblyn RM. Problem-based learning: an approach to medical education . New York: Springer 1980. https://app.nova.edu/toolbox/instructionalproducts/edd8124/fall11/1980-BarrowsTamblyn-PBL.pdf (accessed February 2024)
  • 2 Round A. Introduction to clinical reasoning. Evaluation Clinical Practice. 2001;7:109–17. https://doi.org/10.1046/j.1365-2753.2001.00252.x
  • 3 Dy-Boarman EA, Bryant GA, Herring MS. Faculty preceptors’ strategies for teaching clinical reasoning skills in the advanced pharmacy practice experience setting. Currents in Pharmacy Teaching and Learning. 2021;13:623–7. https://doi.org/10.1016/j.cptl.2021.01.023
  • 4 Bickley L, Szilagyi P, Hoffman R, et al. Bates’ guide to physical examination and history taking . 13th ed. Philadelphia: Wolters Kluwer; 2021.
  • 5 Wright DFB, Anakin MG, Duffull SB. Clinical decision-making: An essential skill for 21st century pharmacy practice. Research in Social and Administrative Pharmacy. 2019;15:600–6. https://doi.org/10.1016/j.sapharm.2018.08.001
  • 6 Eva KW. What every teacher needs to know about clinical reasoning. Med Educ. 2005;39:98–106. https://doi.org/10.1111/j.1365-2929.2004.01972.x
  • 7 Kahneman D. Thinking fast and slow . 1st ed. New York: Farrar, Straus and Giroux 2011.
  • 8 Effective Practitioner. Core skills of decision making. NHS Education for Scotland. https://www.effectivepractitioner.nes.scot.nhs.uk/clinical-practice/core-skills-of-decision-making.aspx (accessed February 2024)
  • 9 Trimble M, Hamilton P. The thinking doctor: clinical decision making in contemporary medicine. Clin Med. 2016;16:343–6. https://doi.org/10.7861/clinmedicine.16-4-343
  • 10 Bowen JL. Educational Strategies to Promote Clinical Diagnostic Reasoning. N Engl J Med. 2006;355:2217–25. https://doi.org/10.1056/nejmra054782
  • 11 Sackett D. The rational clinical examination. A primer on the precision and accuracy of the clinical examination. JAMA . 1992;267:2638–44.
  • 12 Kohn MA. Understanding evidence-based diagnosis. Diagnosis. 2014;1:39–42. https://doi.org/10.1515/dx-2013-0003
  • 13 Gavinski K, Covin YN, Longo PJ. Learning How to Build Illness Scripts. Academic Medicine. 2019;94:293–293. https://doi.org/10.1097/acm.0000000000002493
  • 14 Charlin B, Boshuizen HPA, Custers EJ, et al. Scripts and clinical reasoning. Medical Education. 2007;41:1178–84. https://doi.org/10.1111/j.1365-2923.2007.02924.x
  • 15 Croskerry P. The Importance of Cognitive Errors in Diagnosis and Strategies to Minimize Them. Academic Medicine. 2003;78:775–80. https://doi.org/10.1097/00001888-200308000-00003
  • 16 Klein JG. Five pitfalls in decisions about diagnosis and prescribing. BMJ. 2005;330:781–3. https://doi.org/10.1136/bmj.330.7494.781


Diagnostic reasoning in internal medicine: a practical reappraisal

  • IM - REVIEW
  • Open access
  • Published: 01 December 2020
  • Volume 16, pages 273–279 (2021)


  • Gino Roberto Corazza (ORCID: orcid.org/0000-0001-9532-0573)
  • Marco Vincenzo Lenti
  • Peter David Howdle


The practice of clinical medicine needs to be a very flexible discipline which can adapt promptly to continuously changing surrounding events. Despite the huge advances and progress made in recent decades, clinical reasoning to achieve an accurate diagnosis still seems to be the most appropriate and distinctive feature of clinical medicine. This is particularly evident in internal medicine where diagnostic boundaries are often blurred. Making a diagnosis is a multi-stage process which requires proper data collection, the formulation of an illness script and testing of the diagnostic hypothesis. To make sense of a number of variables, physicians may follow an analytical or an intuitive approach to clinical reasoning, depending on their personal experience and level of professionalism. Intuitive thinking is more typical of experienced physicians, but is not devoid of shortcomings. Particularly, the high risk of biases must be counteracted by de-biasing techniques, which require constant critical thinking. In this review, we discuss critically the current knowledge regarding diagnostic reasoning from an internal medicine perspective.


Introduction

The burden of disease is always changing [ 1 ] and the current COVID-19 pandemic [ 2 ] is an example of this phenomenon. Health systems tend to adapt to the changing burden of disease by developing a variety of strategies, including more precise and advanced techniques, such as molecular analysis, genetic mapping, enhanced imaging modalities, or innovative and targeted drugs [ 3 ]. However, even in such a time of transition and technological advance, clinical medicine remains an area dominated by uncertainty and probability, and correct diagnostic reasoning, the prerequisite for correct management, remains the cornerstone of good clinical practice.

Making a diagnosis is a cognitive process of logic which involves an element of considering different options (i.e. categorical approximation) and is, therefore, liable to errors that result in adverse patient outcomes [ 4 ]. As already mentioned, the frequency of mistakes in clinical practice has not been reduced by progressive technological improvement [ 5 ]; on the contrary, overreliance on new procedures has directly increased the occurrence of such adverse outcomes [ 6 ].

A focus on diagnosis is what has been said to define and to differentiate internal medicine from other medical specialties and it has been proposed that the term which may best characterise an internist is “diagnostician” [ 7 ]. In internal medicine, patients are more “diagnostically undifferentiated” than in other specialties and, as a consequence, the diagnostic process is susceptible to a higher failure rate [ 8 ], largely due to inconsistencies in this reasoning process [ 9 , 10 ]. Conversely, particularly in internal medicine, characterised by an extremely large body of factual evidence, using a correct diagnostic methodology allows compensation for inevitable case-specific weaknesses [ 11 ].

In this paper, we discuss the various stages of the diagnostic process, the prioritisation of different diagnostic reasoning strategies and the procedures by which to detect and prevent the most common errors from a practical, clinical, internal medicine viewpoint.

The multistage process of diagnosis

Clinical diagnosis is a multistage procedure, the sequential phases of which are shown in Fig.  1 . Irrespective of the clinical setting (outpatient clinic, hospital ward or intensive care unit), a diagnosis should be accurate in order to be maximally effective, and efficient in terms of timeliness and the correct use of resources.

figure 1

Sequential process of making a diagnosis. Any diagnostic reasoning starts from data acquisition that must be as accurate as possible. Through data acquisition, the physician is able to produce an illness script, generating diagnostic hypotheses, which will be subsequently tested

Especially in internal medicine, the first and most important phase is the patient-centred interview which takes an holistic and systematic approach. Such an interview aims at collecting structured information and at the same time offers a unique opportunity to gain the patient's confidence, trust, and adherence to guidance. These emotional and empathic aspects of communication, important as they are, are outside the scope of this review and have already been discussed in consensus documents [ 12 ].

Data collection and illness scripts

There are no codified guidelines on how to conduct data collection through a medical interview; rather, it should be modelled on the cultural, social and clinical characteristics of each patient. It is, of course, necessary to avoid collecting an amorphous mix of clinically relevant and irrelevant data, and also to avoid focusing on specific organ systems and thus neglecting the patient in his or her entirety. On the other hand, asking the patient what problems led him or her to the visit, when they started, how they evolved over time, what events preceded them and what impact they had on his or her life represents a proper and shared starting point.

At that point, the patient’s responses are translated into medical equivalents that link the case to formal knowledge [ 13 ] and it is on the basis of these collected data that the doctor should seek to outline an early and succinct “problem representation” of the patient [ 14 ]. This representation consists in translating his or her present and past history into a meaningful list of clinical problems, detailed by the use of semantic qualifiers, i.e. bipolar descriptors, such as acute/chronic, mild/severe, single/multiple, continuous/recurrent [ 15 ]. Each symptom should not be considered as a single element but, where possible, embedded with other related symptoms in a cluster or syndrome [ 16 ]. This provisional process should be developed as early as possible, since it must guide further in-depth questions and provide a focus for a physical examination, although that should still be as comprehensive as possible. Thus, through a process of progressively characterising and prioritising the various elements the diagnostician can increasingly define the picture and differentiate it from others, so that an illness script is configured in the clinician’s mind, that is, a story-like narration of disease in which predisposing conditions, pathophysiological mechanisms, and clinical features are articulated [ 17 ]. It is a partially automatic experiential process, but precisely for this reason, it is not universally available. The possibility and ease of script triggering depend on the pre-stored expertise of each doctor, that is, on the individual repertoire of knowledge and clinical experience. For example, faced with a young woman with chronic diarrhoea as the main complaint, a defined script, such as that of Fig.  2 , will be more easily retrieved by an internist who has already encountered similar cases. Furthermore, illness scripts have a series of inherent limitations. 
For example, the information contained in a script is not exclusive of it but can belong to several scripts, the activation of one particular script may lead to the activation of a different one due to a sharing effect, and scripts can only interpret the instances of an illness at that particular timepoint and are limited to it [ 18 ].

figure 2

An illness script of a patient with chronic diarrhoea. The illness script is made of three distinct components, namely the predisposing conditions, the pathophysiological mechanisms, and the clinical features. The interaction among the various components generates diagnostic hypotheses

Models of diagnostic reasoning and hypothesis generation

The configuration in a doctor's mind of a particular script, whether it is correct or not, is essential for the hypothesis generation of disease (Fig.  1 ), which must subsequently be confirmed or ruled out. These early stages of the diagnostic process are crucial because an appropriate problem representation, to which a consistent and meaningful illness script is connected, avoids random-generated hypotheses based on isolated and clinically irrelevant findings which lead to incorrect conclusions or diagnoses [ 14 ].

Although the process of making a diagnosis should theoretically be as flexible and as adaptable as possible and encompass a range of principles and modes of reasoning [ 19 ], currently used approaches can be schematically represented into two distinct models, that is, either the intuitive or the analytical approach, the main characteristics of which are listed in Table 1 .

The intuitive approach, inductive and empirically based, makes use, below the threshold of perceptible consciousness, of mental shortcuts (heuristics), allows for quick conclusions, and does not require the use of specific resources. It synthesises the information collected and leads to a consideration of the clinical picture as a whole. Symptoms and findings considered in clusters are quickly and automatically related to a prototype of disease through a pattern recognition process that is already present in the doctor’s mental database [ 20 , 21 ]. It is clear that for an experienced clinician, the retrieval of encapsulated knowledge, which then leads to a diagnostic hypothesis, starts with data collection and, gradually taking shape, is implemented with the configuration of an illness script.

It must be recognised that this intuitive model rests on a proven scientific background, and has nothing to do with the so-called “gut feeling” which may be used in an initial general assessment by a general practitioner. This consists of an intuitive feeling of alarm (more rarely of reassurance), which usually allows the doctor to distinguish what is urgent from what is not [ 22 ]. In the setting of general practice, a very detailed diagnosis is often a less pressing goal than an appropriate and timely referral [ 23 ] and, accordingly, the gut feeling has a more prognostic than diagnostic value.

The analytical approach, the other main diagnostic reasoning model, based on a hypothetico-deductive logic, relies on the conscious and deliberate use of evidence-based algorithms or decision trees, which test the relative probability of various hypotheses starting from the signs or symptoms considered more relevant or typical, rather than from the clinical picture as a whole [ 24 , 25 , 26 ]. This second type of approach, which requires more time and resources, is instinctively and prudently preferred by novices who have a limited repertoire of clinical experience, or adopted in the case of atypical or rare presentations of disease which are not immediately suggestive of a specific illness script. In general, with increasing clinical practice and thereby expertise, the thought processes of younger, less experienced doctors move from the analytical approach to the intuitive one [ 27 ].

We have few data allowing comparison of the diagnostic success of these two different approaches. In a study involving 20 final-year clinical clerks and 20 expert clinicians who were given the same diagnostic questions, the odds of diagnostic success turned out to be greater when strategies of pattern recognition were used, while, as expected, the novices used a hypothetico-deductive way of reasoning [ 28 ]. Further studies are, however, necessary before firm conclusions can be reached. It is likely that differences in medical knowledge between the groups influenced the results more than the adopted strategy, and it is not possible to exclude an unconscious sharing of a certain degree of intuitive reasoning in those who had adopted a more analytical approach [ 20 ]. On the other hand, it has long been known that, regardless of the general level of clinical expertise, the adoption of one model or the other often depends on the "content specificity" of the clinical case, since success in solving a problem is not a strong predictor of successfully solving another of a different nature [ 29 ]. In this regard, if we refer to the illness script reported in Fig.  2 , while for some doctors it will be necessary to use very costly, time-consuming algorithms that may need invasive procedures for the study of chronic diarrhoea, for others, already content-aware of the problem, the presence of predisposing (gender and age), defining (chronic diarrhoea, iron-deficiency anaemia, history of delayed menarche) or differentiating findings (absence of abdominal pain, which minimises the suspicion of irritable bowel syndrome or Crohn's disease) will quickly and quite obviously promote the hypothesis of coeliac disease.

In conclusion, if it is true that we have no evidence about the superiority of the intuitive versus the analytical model, or vice versa, it is equally true that we should not consider these approaches mutually exclusive. For some time, an integrated approach has been proposed, combining the strengths of each [ 30 ]. While according to some, such integration would take place in the final stages of diagnostic reasoning with the analytical model prevailing over the intuitive one [ 31 ], according to others, the two models constitute a bidirectional continuum of mutual integration, within which the intuitive model is expected to prevail in the early stages of hypothesis generation and the analytical one in the final stages of hypothesis testing [ 32 ].

Assessing post-history probability

Whichever method of reasoning we use is qualitative in nature, which explains why clinicians are natural Bayesians [ 33 ]. Bayes' theorem allows an estimation of the post-test probability of a certain disease given its pre-test probability (the prevalence of that disease in the underlying population, or the clinician's belief about its prevalence) and the accuracy of the test. There is no doubt that only a small minority of clinicians, if any, formally practise in this way [ 34 ], and that some are unaware of the real prevalence of a disease in relation to an individual patient's characteristics (e.g. family and personal history, life habits, therapy, symptoms or clinical signs) [ 35 ]. The post-test probability of disease depends, in addition to its frequency, on a series of additional data (e.g. test results, symptoms, clusters of symptoms) that emerge in the various stages of the diagnostic pathway [ 36 ].
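The Bayesian updating described above can be sketched quantitatively using likelihood ratios: post-test odds equal pre-test odds multiplied by the likelihood ratio of the observed test result. The function below is a minimal illustration (its name and the example figures are our own assumptions for demonstration, not values taken from the text):

```python
def post_test_probability(pre_test_prob: float,
                          sensitivity: float,
                          specificity: float,
                          positive_result: bool = True) -> float:
    """Update a pre-test probability after a test result using
    likelihood ratios (post-test odds = pre-test odds * LR)."""
    # Likelihood ratio for the observed result
    if positive_result:
        lr = sensitivity / (1.0 - specificity)   # LR+
    else:
        lr = (1.0 - sensitivity) / specificity   # LR-
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Hypothetical example: 1% pre-test probability, a test with
# 90% sensitivity and 95% specificity, positive result.
print(round(post_test_probability(0.01, 0.90, 0.95), 3))  # 0.154
```

Note how a positive result from a reasonably accurate test raises a 1% prior to only about 15%: this is the sense in which post-test probability depends on disease frequency as much as on test accuracy.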

At any rate, it is by applying such a process to the illness script reported in Fig.  2 that the likelihood of coeliac disease becomes a strong probability.

Hypothesis testing

This is the third phase of the sequential diagnostic process (Fig.  1 ). To confirm or refute the previously formulated diagnostic hypotheses, and also to quantify the severity of the disease, the physician can be aided by blood tests (from the most common to the most specific functional tests) to assess the extent of organ damage, imaging techniques (from traditional to the most modern) and invasive endoscopic procedures, either separately or in various combinations. The selection of these tests or procedures must not be made fortuitously, hoping to hit an unknown target through a "shotgun" approach, as this often leads to results which are difficult to interpret and contradict the clinical conclusions. Like the previous phases of the diagnostic process, hypothesis testing must be specific and intentional, and follow the criteria of cost-effectiveness and "choosing wisely" campaigns [ 37 ].

Bias and debiasing

A heuristic process, as we have seen, represents an essential part of intuitive thinking, and is the preferred way of diagnostic reasoning for most expert clinicians. It is a very useful tool, but the clinician must maintain a critical attitude to his or her decision-making to avoid a number of cognitive biases, which can lead to potentially serious consequences for the patient and have been the subject of extensive reviews [ 38 , 39 , 40 ]. The most common biases consist of (1) availability bias, i.e. the tendency to consider a diagnosis which is easily retrievable because it is common, exceptional or severe; (2) representative bias, i.e. the tendency to consider only the typical or characteristic manifestations of a disease without taking into account any possible atypical manifestations; (3) confirmatory bias, i.e. the tendency to seek confirmation and to reject disproval to a given hypothesis; (4) anchoring bias, i.e. the tendency to consider correct the diagnosis already formulated in spite of contrary evidence; (5) premature closure bias, i.e. the loss of important information due to the early conclusion of the diagnostic procedure. This latter represents a very frequent bias, although it is usually linked to the analytical approach [ 41 ]. Table 2 shows practical examples of the aforementioned biases in relation to the illness script reported in Fig.  2 .

As stated above, intuitive thinking often develops below the threshold of awareness; it is not easy, therefore, to avoid such biases. Hence, de-biasing strategies have been developed [ 39 , 40 , 42 ]; however, their initiation in routine clinical practice does presuppose an uncommon critical attitude. Furthermore, most of these de-biasing strategies have been shown to be ineffective in clinical practice. In particular, educational interventions with trainees [ 43 ], the use of a de-biasing checklist [ 44 ], or slowing down the reasoning process and being more thoughtful [ 45 ], did not improve the diagnostic process.

To summarise, awareness of these biases, of the mechanisms that underlie them and of the need to maintain continuous vigilance towards them, together with the integration of the intuitive approach with the analytical one [ 30 ], currently constitutes the safeguard that can contain these errors [ 46 ]. Other alternatives are not currently available; what is important is that the possibility of cognitive bias must not, and should not, interfere with or limit the use of an intuitive way of diagnostic reasoning.

Conclusions

It is universally acknowledged that clinical practice is a difficult role and that the bedside is certainly a more uncomfortable place than the bench, given that the latter deals experimentally with one variable whereas the former needs to manage multiple variables at one time within the same patient [ 47 ]. In clinical medicine, compared with other natural sciences, levels of certainty are reduced and degrees of freedom are increased and, as a method of analysis, mathematics gives way to statistical probability [ 48 ]. The classical characteristic of clinical medicine is the diagnosis, and this is why the fundamentals of diagnostic reasoning should be a core skill taught by medical schools [ 49 ].

However, it remains controversial as to whether clinical/diagnostic reasoning should be taught or whether it can only be learned by students at the bedside, independently of teaching staff [ 50 ]. Expert teachers are crucial in guiding the identification of conceptual and causal relationships between apparently unrelated findings [ 51 ] and in promoting and guiding qualitative feedback on the thought processes of students using metacognition, i.e. the conscious ability critically to monitor one’s thinking [ 52 ]. This process of open reflection and feedback is considered the most distinctive feature of “deliberate practice”, and is a leading theory of expertise development and maintenance [ 53 ]. It is particularly suited to the practice of internal medicine. However, important though it may be, deliberate practice is not the only predictor of expert performance. Individual ability, such as working memory capacity, i.e. the ways to efficiently store and retrieve knowledge in memory, can be equally important [ 54 ].

Internal medicine is the largest medical specialty in the United States [55] and is burdened by a higher rate of diagnostic errors than other specialties [10, 56]. Its main task is to coordinate the care of adult or elderly patients with chronic, complex and often multiple diseases [57]. Because of these characteristics, internal medicine is considered the domain of clinical complexity, to which multimorbidity, the patient’s individual characteristics and contextual non-biological factors all contribute [58].

Everything discussed above about the various stages of the diagnostic process applies to internal medicine. In particular, flexibility and adaptability are essential, including the ability to switch from one diagnostic reasoning strategy to another according to contextual needs. It must be stressed that, when faced with complex patients, a reductionist mindset focused on individual problems must give way to thinking in terms of the various systems and elements interacting together [59]. This way of reasoning aligns well with the intuitive approach: considering these systemic aspects of the patient, as well as their context, supplies additional predictors that add detail to the illness script, facilitate its retrieval, and thus increase the consistency of the hypothesis.

Internists, therefore, are asked to broaden their critical approach to encompass the patient’s environment, to commit themselves to seeking connections in order to synthesise the diagnostic hypothesis, and to rely on their experience, however limited it may be. Despite its limitations, diagnosis remains an example of science in action [60], of which clinical experience and the maintenance of a critical attitude are certainly important components. We agree that the experiential component of clinical expertise constitutes a safer guide in the diagnostic process than knowledge of the latest evidence-based systematic review [13]. Indeed, the clinical competence of the physician can be replaced neither by the multiplication of technological advances nor by evidence-based medicine, whose numerous limitations are now becoming apparent; rather, all these elements need to be integrated.

References

1. Jones DS, Podolsky SH, Greene JA (2012) The burden of disease and the changing task of medicine. N Engl J Med 366:2333–2338

2. Zhu N, Zhang D, Wang W et al (2020) A novel coronavirus from patients with pneumonia in China, 2019. N Engl J Med 382:727–733

3. Collins FS, Varmus H (2015) A new initiative on precision medicine. N Engl J Med 372:793–795

4. Clark BW, Derakhshan A, Desai SV (2018) Diagnostic errors and the bedside clinical examination. Med Clin North Am 102:453–464

5. Lundberg GD (1998) Low-tech autopsies in the era of high-tech medicine: continued value for quality assurance and patient safety. JAMA 280:1273–1274

6. Kirch W, Schafii C (1996) Misdiagnosis at a university hospital in 4 medical eras. Medicine (Baltimore) 75:29–40

7. Wortmann RL (1998) The clinical philosophy of internal medicine. Am J Med 104:323–326

8. Croskerry P (2013) From mindless to mindful practice–cognitive bias and clinical decision making. N Engl J Med 368:2445–2448

9. Graber ML, Franklin N, Gordon R (2005) Diagnostic error in internal medicine. Arch Intern Med 165:1493–1499

10. van den Berge K, Mamede S (2013) Cognitive diagnostic error in internal medicine. Eur J Intern Med 24:525–529

11. Corazza GR, Lenti MV, Di Sabatino A (2014) Trusting internal medicine in hard times. Intern Emerg Med 9:121–122

12. Platt FW, Gaspar DL, Coulehan JL et al (2001) “Tell me about yourself”: the patient-centered interview. Ann Intern Med 134:1079–1085

13. Norman G (2006) Building on experience–the development of clinical reasoning. N Engl J Med 355:2251–2252

14. Bowen JL (2006) Educational strategies to promote clinical diagnostic reasoning. N Engl J Med 355:2217–2225

15. Bordage G, Connell KJ, Chang RW, Gecht MR, Sinacore JM (1997) Assessing the semantic content of clinical case presentations: studies of reliability and concurrent validity. Acad Med 72:S37–S39

16. Thammasitboon S, Cutrer WB (2013) Diagnostic decision-making and strategies to improve diagnosis. Curr Probl Pediatr Adolesc Health Care 43:232–241

17. Charlin B, Tardif J, Boshuizen HP (2000) Scripts and medical diagnostic knowledge: theory and applications for clinical reasoning instruction and research. Acad Med 75:182–190

18. Charlin B, Boshuizen HP, Custers EJ, Feltovich PJ (2007) Scripts and clinical reasoning. Med Educ 41:1178–1184

19. Kassirer JP, Moskowitz AJ, Lau J, Pauker SG (1987) Decision analysis: a progress report. Ann Intern Med 106:275–291

20. Norman G, Young M, Brooks L (2007) Non-analytical models of clinical reasoning: the role of experience. Med Educ 41:1140–1145

21. Brush JE Jr, Sherbino J, Norman GR (2017) How expert clinicians intuitively recognize a medical diagnosis. Am J Med 130:629–634

22. Stolper E, van Bokhoven M, Houben P et al (2009) The diagnostic role of gut feelings in general practice. A focus group study of the concept and its determinants. BMC Fam Pract 10:17

23. Yazdani S, Hosseinzadeh M, Hosseini F (2017) Models of clinical reasoning with a focus on general practice: a critical review. J Adv Med Educ Prof 5:177–184

24. Bordage G (1994) Elaborated knowledge: a key to successful diagnostic thinking. Acad Med 69:883–885

25. Eva KW (2005) What every teacher needs to know about clinical reasoning. Med Educ 39:98–106

26. Round A (2001) Introduction to clinical reasoning. J Eval Clin Pract 7:109–117

27. Woods NN, Brooks LR, Norman GR (2007) It all make sense: biomedical knowledge, causal connections and memory in the novice diagnostician. Adv Health Sci Educ Theory Pract 12:405–415

28. Coderre S, Mandin H, Harasym PH, Fick GH (2003) Diagnostic reasoning strategies and diagnostic success. Med Educ 37:695–703

29. Elstein AS, Shulman LS, Sprafka SA (1978) Medical Problem Solving: an analysis of clinical reasoning. Harvard University Press, Cambridge

30. Loftus S (2012) Rethinking clinical reasoning: time for a dialogical turn. Med Educ 46:1174–1178

31. Croskerry P (2009) A universal model of diagnostic reasoning. Acad Med 84:1022–1028

32. Ark TK, Brooks LR, Eva KW (2007) The benefits of flexibility: the pedagogical value of instructions to adopt multifaceted diagnostic reasoning strategies. Med Educ 41:281–287

33. Gill CJ, Sabin L, Schmid CH (2005) Why clinicians are natural Bayesians [published correction appears in BMJ. 2005 Jun 11;330(7504):1369]. BMJ 330:1080–1083

34. Elstein AS, Schwartz A (2002) Clinical problem solving and diagnostic decision making: selective review of the cognitive literature. BMJ 324:729–732

35. Phelps MA, Levitt MA (2004) Pretest probability estimates: a pitfall to the clinical utility of evidence-based medicine? Acad Emerg Med 11:692–694

36. Summerton N (2008) The medical history as a diagnostic technology. Br J Gen Pract 58:273–276

37. Montano N, Costantino G, Casazza G et al (2016) The Italian Society of Internal Medicine choosing wisely campaign. Intern Emerg Med 11:1125–1130

38. Elstein AS (1999) Heuristics and biases: selected errors in clinical reasoning. Acad Med 74:791–794

39. Croskerry P (2003) The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 78:775–780

40. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S (2017) The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Acad Med 92:23–30

41. Dhaliwal G (2017) Premature closure? Not so fast. BMJ Qual Saf 26:87–89

42. Scott IA (2009) Errors in clinical reasoning: causes and remedial strategies. BMJ 338:b1860

43. Sherbino J, Kulasegaram K, Howey E, Norman G (2014) Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: a controlled trial. CJEM 16:34–40

44. Shimizu T, Matsumoto K, Tokuda Y (2013) Effects of the use of differential diagnosis checklist and general de-biasing checklist on diagnostic performance in comparison to intuitive diagnosis. Med Teach 35:e1218–e1229

45. Sherbino J, Dore KL, Wood TJ et al (2012) The relationship between response time and diagnostic accuracy. Acad Med 87:785–791

46. Redelmeier DA (2005) Improving patient care. The cognitive psychology of missed diagnoses. Ann Intern Med 142:115–120

47. Cox K (1995) Clinical practice is not applied scientific method. Aust NZ J Surg 65:553–557

48. Blois MS (1988) Medicine and the nature of vertical reasoning. N Engl J Med 318:847–851

49. Amey L, Donald KJ, Teodorczuk A (2017) Teaching clinical reasoning to medical students. Br J Hosp Med (Lond) 78:399–401

50. Schuwirth L (2002) Can clinical reasoning be taught or can it only be learned. Med Educ 36:695–696

51. McMillan WJ (2010) Teaching for clinical reasoning—helping students make the conceptual links. Med Teach 32:e436–e442

52. Groves M (2012) Understanding clinical reasoning: the next step in working out how it really works. Med Educ 46:444–446

53. Durning SJ, Ratcliffe T, Artino AR Jr et al (2013) How is clinical reasoning developed, maintained, and objectively assessed? Views from expert internists and internal medicine interns. J Contin Educ Health Prof 33:215–223

54. Kulasegaram KM, Grierson LE, Norman GR (2013) The roles of deliberate practice and innate ability in developing expertise: evidence and implications. Med Educ 47:979–989

55. Hemmer PA, Costa ST, DeMarco DM, Linas SL, Glazier DC, Schuster BL (2007) Predicting, preparing for, and creating the future: what will happen to internal medicine? Am J Med 120:1091–1096

56. Thomas EJ, Studdert DM, Burstin HR et al (2000) Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 38:261–271

57. Weinberger SE (2015) Challenges for internal medicine as the American College of Physicians celebrates its 100th anniversary. Ann Intern Med 162:585–586

58. Corazza GR, Formagnana P, Lenti MV (2019) Bringing complexity into clinical practice: an internistic approach. Eur J Intern Med 61:9–14

59. Sturmberg JP (2016) Complexity mindsets at work. J Eval Clin Pract 22:101–102

60. Willis BH, Beebee H, Lasserson DS (2013) Philosophy of science and the diagnostic process. Fam Pract 30:501–505


Acknowledgements

Open access funding provided by Università degli Studi di Pavia within the CRUI-CARE Agreement. This research is part of a project for the study of clinical complexity (SMAC study) funded by San Matteo Hospital Foundation—Italian Ministry of Health (Progetto di Ricerca Corrente 2017—PI Prof. Gino Roberto Corazza). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

First Department of Internal Medicine, San Matteo Hospital Foundation, University of Pavia, Pavia, Italy

Gino Roberto Corazza & Marco Vincenzo Lenti

The School of Medicine, University of Leeds, Leeds, UK

Peter David Howdle

Emeritus Professor of Internal Medicine, Clinica Medica, Fondazione IRCCS Policlinico San Matteo, Piazzale Golgi 19, 27100, Pavia, Italy

Gino Roberto Corazza


Contributions

All authors participated in the drafting of the manuscript or critical revision of the manuscript for important intellectual content, and provided approval of the final submitted version.

Corresponding author

Correspondence to Gino Roberto Corazza .

Ethics declarations

Conflict of interest

None to disclose.

Ethics approval and consent to participate

Not applicable.

Human and animal rights

Not applicable.

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Corazza, G.R., Lenti, M.V. & Howdle, P.D. Diagnostic reasoning in internal medicine: a practical reappraisal. Intern Emerg Med 16 , 273–279 (2021). https://doi.org/10.1007/s11739-020-02580-0


Received : 20 August 2020

Accepted : 26 October 2020

Published : 01 December 2020

Issue Date : March 2021

DOI : https://doi.org/10.1007/s11739-020-02580-0


Keywords

  • Clinical reasoning
  • Internal medicine


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Committee on Diagnostic Error in Health Care; Board on Health Care Services; Institute of Medicine; The National Academies of Sciences, Engineering, and Medicine; Balogh EP, Miller BT, Ball JR, editors. Improving Diagnosis in Health Care. Washington (DC): National Academies Press (US); 2015 Dec 29.


2 The Diagnostic Process

This chapter provides an overview of diagnosis in health care, including the committee's conceptual model of the diagnostic process and a review of clinical reasoning. Diagnosis has important implications for patient care, research, and policy. Diagnosis has been described as both a process and a classification scheme, or a “pre-existing set of categories agreed upon by the medical profession to designate a specific condition” ( Jutel, 2009 ). 1 When a diagnosis is accurate and made in a timely manner, a patient has the best opportunity for a positive health outcome because clinical decision making will be tailored to a correct understanding of the patient's health problem ( Holmboe and Durning, 2014 ). In addition, public policy decisions are often influenced by diagnostic information, such as setting payment policies, resource allocation decisions, and research priorities ( Jutel, 2009 ; Rosenberg, 2002 ; WHO, 2012 ).

The chapter describes important considerations in the diagnostic process, such as the roles of diagnostic uncertainty and time. It also highlights the mounting complexity of health care, due to the ever-increasing options for diagnostic testing 2 and treatment, the rapidly rising levels of biomedical and clinical evidence to inform clinical practice, and the frequent comorbidities among patients due to the aging of the population ( IOM, 2008 , 2013b ). The rising complexity of health care and the sheer volume of advances, coupled with clinician time constraints and cognitive limitations, have outstripped human capacity to apply this new knowledge. To help manage this complexity, the chapter concludes with a discussion of the role of clinical practice guidelines in informing decision making in the diagnostic process.

OVERVIEW OF THE DIAGNOSTIC PROCESS

To help frame and organize its work, the committee developed a conceptual model to illustrate the diagnostic process (see Figure 2-1 ). The committee concluded that the diagnostic process is a complex, patient-centered, collaborative activity that involves information gathering and clinical reasoning with the goal of determining a patient's health problem. This process occurs over time, within the context of a larger health care work system that influences the diagnostic process (see Box 2-1 ). The committee's depiction of the diagnostic process draws on an adaptation of a decision-making model that describes the cyclical process of information gathering, information integration and interpretation, and forming a working diagnosis ( Parasuraman et al., 2000 ; Sarter, 2014 ).

Figure 2-1. The committee's conceptualization of the diagnostic process.

Box 2-1. The Work System.

The diagnostic process proceeds as follows: First, a patient experiences a health problem. The patient is likely the first person to consider his or her symptoms and may choose at this point to engage with the health care system. Once a patient seeks health care, there is an iterative process of information gathering, information integration and interpretation, and determining a working diagnosis. Performing a clinical history and interview, conducting a physical exam, performing diagnostic testing, and referring or consulting with other clinicians are all ways of accumulating information that may be relevant to understanding a patient's health problem. The information-gathering approaches can be employed at different times, and diagnostic information can be obtained in different orders. The continuous process of information gathering, integration, and interpretation involves hypothesis generation and updating prior probabilities as more information is learned. Communication among health care professionals, the patient, and the patient's family members is critical in this cycle of information gathering, integration, and interpretation.
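The cycle of updating prior probabilities as new information arrives can be made concrete with a short sketch of sequential Bayesian updating using likelihood ratios. The function and the numbers below are illustrative assumptions for exposition, not values from the report:

```python
def update_probability(pretest_prob, likelihood_ratio):
    """Revise a pre-test probability into a post-test probability
    using a finding's likelihood ratio (odds form of Bayes' rule)."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Hypothetical numbers: a working diagnosis that starts at a 10%
# pre-test probability is revised upward by two positive findings.
p = 0.10
for lr in (6.0, 2.5):  # assumed likelihood ratios for the two findings
    p = update_probability(p, lr)
print(f"post-test probability: {p:.2f}")
```

Each finding moves the probability of the working diagnosis up or down (a negative finding, with a likelihood ratio below 1, would lower it), mirroring how the hypothesis list is refined as information accumulates.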

The working diagnosis may be either a list of potential diagnoses (a differential diagnosis) or a single potential diagnosis. Typically, clinicians will consider more than one diagnostic hypothesis or possibility as an explanation of the patient's symptoms and will refine this list as further information is obtained in the diagnostic process. The working diagnosis should be shared with the patient, including an explanation of the degree of uncertainty associated with a working diagnosis. Each time there is a revision to the working diagnosis, this information should be communicated to the patient. As the diagnostic process proceeds, a fairly broad list of potential diagnoses may be narrowed into fewer potential options, a process referred to as diagnostic modification and refinement ( Kassirer et al., 2010 ). As the list becomes narrowed to one or two possibilities, diagnostic refinement of the working diagnosis becomes diagnostic verification, in which the lead diagnosis is checked for its adequacy in explaining the signs and symptoms, its coherency with the patient's context (physiology, risk factors), and whether a single diagnosis is appropriate. When considering invasive or risky diagnostic testing or treatment options, the diagnostic verification step is particularly important so that a patient is not exposed to these risks without a reasonable chance that the testing or treatment options will be informative and will likely improve patient outcomes.

Throughout the diagnostic process, there is an ongoing assessment of whether sufficient information has been collected. If the diagnostic team members are not satisfied that the necessary information has been collected to explain the patient's health problem or that the information available is not consistent with a diagnosis, then the process of information gathering, information integration and interpretation, and developing a working diagnosis continues. When the diagnostic team members judge that they have arrived at an accurate and timely explanation of the patient's health problem, they communicate that explanation to the patient as the diagnosis.

It is important to note that clinicians do not need to obtain diagnostic certainty prior to initiating treatment; the goal of information gathering in the diagnostic process is to reduce diagnostic uncertainty enough to make optimal decisions for subsequent care ( Kassirer, 1989 ; see section on diagnostic uncertainty). In addition, the provision of treatment can also inform and refine a working diagnosis, which is indicated by the feedback loop from treatment into the information-gathering step of the diagnostic process. This also illustrates the need for clinicians to diagnose health problems that may arise during treatment.

The committee identified four types of information-gathering activities in the diagnostic process: taking a clinical history and interview; performing a physical exam; obtaining diagnostic testing; and sending a patient for referrals or consultations. The diagnostic process is intended to be broadly applicable, including the provision of mental health care. These information-gathering processes are discussed in further detail below.

Clinical History and Interview

Acquiring a clinical history and interviewing a patient provides important information for determining a diagnosis and also establishes a solid foundation for the relationship between a clinician and the patient. A common maxim in medicine attributed to William Osler is: “Just listen to your patient, he is telling you the diagnosis” ( Gandhi, 2000 , p. 1087). An appointment begins with an interview of the patient, when a clinician compiles a patient's medical history or verifies that the details of the patient's history already contained in the patient's medical record are accurate. A patient's clinical history includes documentation of the current concern, past medical history, family history, social history, and other relevant information, such as current medications (prescription and over-the-counter) and dietary supplements.

The process of acquiring a clinical history and interviewing a patient requires effective communication, active listening skills, and tailoring communication to the patient based on the patient's needs, values, and preferences. The National Institute on Aging, in guidance for conducting a clinical history and interview, suggests that clinicians should avoid interrupting, demonstrate empathy, and establish a rapport with patients ( NIA, 2008 ). Clinicians need to know when to ask more detailed questions and how to create a safe environment for patients to share sensitive information about their health and symptoms. Obtaining a history can be challenging in some cases: For example, in working with older adults with memory loss, with children, or with individuals whose health problems limit communication or reliable self-reporting. In these cases it may be necessary to include family members or caregivers in the history-taking process. The time pressures often involved in clinical appointments also contribute to challenges in the clinical history and interview. Limited time for clinical visits, partially attributed to payment policies (see Chapter 7 ), may lead to an incomplete picture of a patient's relevant history and current signs and symptoms.

There are growing concerns that traditional “bedside evaluation” skills (history, interview, and physical exam) have received less attention due to the large growth in diagnostic testing in medicine. Verghese and colleagues noted that these methods were once the primary tools for diagnosis and clinical evaluation, but “the recent explosion of imaging and laboratory testing has inverted the diagnostic paradigm. [Clinicians] often bypass the bedside evaluation for immediate testing” ( Verghese et al., 2011 , p. 550). The interview has been called a clinician's most versatile diagnostic and therapeutic tool, and the clinical history provides direction for subsequent information-gathering activities in the diagnostic process ( Lichstein, 1990 ). An accurate history facilitates a more productive and efficient physical exam and the appropriate utilization of diagnostic testing ( Lichstein, 1990 ). Indeed, Kassirer concluded: “Diagnosis remains fundamentally dependent on a personal interaction of a [clinician] with a patient, the sufficiency of communication between them, the accuracy of the patient's history and physical examination, and the cognitive energy necessary to synthesize a vast array of information” ( Kassirer, 2014 , p. 12).

Physical Exam

The physical exam is a hands-on observational examination of the patient. First, a clinician observes a patient's demeanor, complexion, posture, level of distress, and other signs that may contribute to an understanding of the health problem ( Davies and Rees, 2010 ). If the clinician has seen the patient before, these observations can be weighed against previous interactions with the patient. A physical exam may include an analysis of many parts of the body, not just those suspected to be involved in the patient's current complaint. A careful physical exam can help a clinician refine the next steps in the diagnostic process, can prevent unnecessary diagnostic testing, and can aid in building trust with the patient ( Verghese, 2011 ). There is no universally agreed upon physical examination checklist; myriad versions exist online and in textbooks.

Due to the growing emphasis on diagnostic testing, there are concerns that physical exam skills have been underemphasized in current health care professional education and training ( Kassirer, 2014 ; Kugler and Verghese, 2010 ). For example, Kugler and Verghese have asserted that there is a high degree of variability in the way that trainees elicit physical signs and that residency programs have not done enough to evaluate and improve physical exam techniques. Physicians at Stanford have developed the “Stanford 25,” a list of physical diagnostic maneuvers that are very technique-dependent ( Verghese and Horwitz, 2009 ). Educators observe students and residents performing these 25 maneuvers to ensure that trainees are able to elicit the physical signs reliably ( Stanford Medicine 25 Team, 2015 ).

Diagnostic Testing

Over the past 100 years, diagnostic testing has become a critical feature of standard medical practice ( Berger, 1999 ; European Society of Radiology, 2010 ). Diagnostic testing may occur in successive rounds of information gathering, integration, and interpretation, as each round of information refines the working diagnosis. In many cases, diagnostic testing can identify a condition before it is clinically apparent; for example, coronary artery disease can be identified by an imaging study indicating the presence of coronary artery blockage even in the absence of symptoms.

The primary emphasis of this section is on laboratory medicine, anatomic pathology, and medical imaging (see Box 2-2 ). However, there are many important forms of diagnostic testing that extend beyond these fields, and the committee's conceptual model is intended to be broadly applicable. Additional forms of diagnostic testing include, for example, screening tools used in making mental health diagnoses ( SAMHSA and HRSA, 2015 ), sleep apnea testing, neurocognitive assessment, and vision and hearing testing.

Box 2-2. Laboratory Medicine, Anatomic Pathology, and Medical Imaging.

Although it was developed specifically for laboratory medicine, the brain-to-brain loop model is useful for describing the general process of diagnostic testing ( Lundberg, 1981 ; Plebani et al., 2011 ). The model includes nine steps: test selection and ordering, sample collection, patient identification, sample transportation, sample preparation, sample analysis, result reporting, result interpretation, and clinical action ( Lundberg, 1981 ). These steps occur during five phases of diagnostic testing: the pre-pre-analytic, pre-analytic, analytic, post-analytic, and post-post-analytic phases. Errors related to diagnostic testing can occur in any of these five phases, but the analytic phase is the least susceptible to errors ( Eichbaum et al., 2012 ; Epner et al., 2013 ; Laposata, 2010 ; Nichols and Rauch, 2013 ; Stratton, 2011 ) (see Chapter 3 ).

The pre-pre-analytic phase, which involves clinician test selection and ordering, has been identified as a key point of vulnerability in the work process due to the large number and variety of available tests, which makes it difficult for nonspecialist clinicians to accurately select the correct test or series of tests ( Hickner et al., 2014 ; Laposata and Dighe, 2007 ). The pre-analytic phase involves sample collection, patient identification, sample transportation, and sample preparation. During the analytic phase, the specimen is tested, examined, or both. Adequate performance in this phase depends on the correct execution of a chemical analysis or morphological examination ( Hollensead et al., 2004 ), and the contribution to diagnostic errors at this step is small. The post-analytic phase includes the generation of results, reporting, interpretation, and follow-up. Ensuring accurate and timely reporting from the laboratory to the ordering clinician and patient is central to this phase. During the post-post-analytic phase, the ordering clinician, sometimes in consultation with pathologists, incorporates the test results into the patient's clinical context, considers the probability of a particular diagnosis in light of the test results, and considers the harms and benefits of future tests and treatments, given the newly acquired information. Possible factors contributing to failure in this phase include an incorrect interpretation of the test result by the ordering clinician or pathologist and the failure by the ordering clinician to act on the test results: for example, not ordering a follow-up test or not providing treatment consistent with the test results ( Hickner et al., 2014 ; Laposata and Dighe, 2007 ; Plebani and Lippi, 2011 ).

The medical imaging work process parallels the work process described for pathology. There is a pre-pre-analytic phase (the selection and ordering of medical imaging), a pre-analytic phase (preparing the patient for imaging), an analytic phase (image acquisition and analysis), a post-analytic phase (the imaging results are interpreted and reported to the ordering clinician or the patient), and a post-post-analytic phase (the integration of results into the patient context and further action). The relevant differences between the medical imaging and pathology processes include the nature of the examination and the methods and technology used to interpret the results.
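The nine steps and five phases described above can be summarized in a small data structure. The grouping of steps into phases below follows our reading of the preceding paragraphs and is a sketch, not a table from Lundberg's paper:

```python
# Lundberg's nine brain-to-brain steps, grouped into the five testing
# phases discussed in the text (the grouping is our interpretation).
PHASES = {
    "pre-pre-analytic": ["test selection and ordering"],
    "pre-analytic": ["sample collection", "patient identification",
                     "sample transportation", "sample preparation"],
    "analytic": ["sample analysis"],
    "post-analytic": ["result reporting", "result interpretation"],
    "post-post-analytic": ["clinical action"],
}

def phase_of(step: str) -> str:
    """Return the testing phase a given step belongs to."""
    for phase, steps in PHASES.items():
        if step in steps:
            return phase
    raise ValueError(f"unknown step: {step}")

print(phase_of("result interpretation"))  # a post-analytic activity
```

Structuring the loop this way makes explicit where an error can enter the process: every phase except the analytic one is dominated by human selection, communication, and interpretation rather than by the measurement itself.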

Laboratory Medicine and Anatomic Pathology

In 2008 a Centers for Disease Control and Prevention (CDC) report described pathology as an “essential element of the health care system,” stating that pathology is “integral to many clinical decisions, providing physicians, nurses, and other health care providers with often pivotal information for the prevention, diagnosis, treatment, and management of disease” ( CDC, 2008 , p. 19). Primary care clinicians order laboratory tests in slightly less than one third of patient visits ( CDC, 2010 ; Hickner et al., 2014 ), and direct-to-patient testing is becoming increasingly prevalent ( CDC, 2008 ). There are now thousands of molecular diagnostic tests available, and this number is expected to increase as the mechanisms of disease at the molecular level are better understood ( CDC, 2008 ; Johansen Taber et al., 2014 ) (see Box 2-3 ).

The task of selecting the appropriate diagnostic testing is challenging for clinicians, in part because of the sheer volume of choices. For example, Hickner and colleagues (2014) found that primary care clinicians report uncertainty in ordering laboratory medicine tests in approximately 15 percent of diagnostic encounters. Choosing the appropriate test requires understanding the patient's history and current signs and symptoms, as well as having a sufficient suspicion or pre-test probability of a disease or condition (see section on probabilistic reasoning) ( Pauker and Kassirer, 1975 , 1980 ; Sox, 1986 ). The likelihood of disease is inherently uncertain in this step; for instance, the clinician's patient population may not reflect epidemiological data, and the patient's history can be incomplete or otherwise complicated. Advances in molecular diagnostic technologies and new diagnostic tests have introduced another layer of complexity. Many clinicians are struggling to keep up with the growing availability of such tests and have uncertainty about the best application of these tests in screening, diagnosis, and treatment ( IOM, 2015a ; Johansen Taber et al., 2014 ).

Diagnostic tests have “operating parameters,” including sensitivity and specificity, which are particular to the diagnostic test for a specific disorder (see section on probabilistic reasoning). Even if a test is performed correctly, there is a chance of a false positive or false negative result. Test interpretation involves reviewing numerical or qualitative (yes or no) results and combining those results with the patient's history, symptoms, and pre-test disease likelihood. Test interpretation needs to be patient-specific and to consider information learned during the physical exam and the clinical history and interview. Several studies have highlighted test interpretation errors, such as the misinterpretation of a false positive human immunodeficiency virus (HIV) screening test for a low-risk patient as indicative of HIV infection ( Gigerenzer, 2013 ; Kleinman et al., 1998 ). In addition, test performance may only be characterized in a limited patient population, leading to challenges with generalizability ( Whiting et al., 2004 ).
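
The arithmetic behind such misinterpretations can be made concrete with Bayes' theorem. The sketch below uses illustrative figures only (the prevalence, sensitivity, and specificity are assumed for the example, not taken from any actual assay); it shows why a positive result from even a highly accurate screening test carries a low post-test probability of disease when the pre-test probability is very low:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test result (Bayes' theorem)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assumed figures for a screening test in a low-risk population:
# 1-in-10,000 pre-test probability, 99.8% sensitivity, 99.9% specificity.
ppv_low_risk = positive_predictive_value(
    prevalence=0.0001,
    sensitivity=0.998,
    specificity=0.999,
)
print(f"{ppv_low_risk:.1%}")  # -> "9.1%": most positive results are false positives
```

With a higher pre-test probability (say, 30 percent in a high-risk group), the same function returns a post-test probability above 99 percent, which is why interpretation must account for the patient's pre-test likelihood rather than the test's accuracy alone.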

The laboratories that conduct diagnostic testing are some of the most regulated and inspected areas in health care (see Table 2-1 ). Some of the relevant entities include The Joint Commission and other accreditors, the federal government, and various other organizations, such as the College of American Pathologists (CAP) and the American Society for Clinical Pathology. There are many ways in which quality is assessed. Examples include proficiency testing of clinical laboratory assays and pathologists (e.g., Pap smear proficiency testing), many of which are regulated under the Clinical Laboratory Improvement Amendments, and inter-laboratory comparison programs (e.g., CAP's Q-Probes, Q-Monitors, and Q-Tracks programs).

Medical Imaging

Medical imaging plays a critical role in establishing the diagnoses for innumerable conditions and it is used routinely in nearly every branch of medicine. The advancement of imaging technologies has improved the ability of clinicians to detect, diagnose, and treat conditions while also allowing patients to avoid more invasive procedures ( European Society of Radiology, 2010 ; Gunderman, 2005 ). For many conditions (e.g., brain tumors), imaging is the only noninvasive diagnostic method available. The appropriate choice of imaging modality depends on the disease, organ, and specific clinical questions to be addressed. Computed tomography (CT) and magnetic resonance imaging (MRI) are first-line methods for assessing conditions of the central and peripheral nervous system, while for musculoskeletal and a variety of other conditions, X-ray and ultrasound are often employed first because of their relatively low cost and ready availability, with CT and MRI being reserved as problem-solving modalities. CT procedures are frequently used to assess and diagnose cancer, circulatory system diseases and conditions, inflammatory diseases, and head and internal organ injuries. A majority of MRI procedures are performed on the spine, brain, and musculoskeletal system, although usage for the breast, prostate, abdominal, and pelvic regions is rising ( IMV, 2014 ).

Medical imaging is characterized not just by the increasingly precise anatomic detail it offers but also by an increasing capacity to illuminate biology. For example, magnetic resonance spectroscopic imaging has allowed the assessment of metabolism, and a growing number of other MRI sequences are offering information about functional characteristics, such as blood perfusion or water diffusion. In addition, several new tracers for molecular imaging with positron emission tomography (PET) (typically performed as PET/CT) have recently been approved for clinical use, and more are undergoing clinical trials, while PET/MRI was recently introduced to the clinical setting. Functional and molecular imaging data may be assessed qualitatively, quantitatively, or both. Although other forms of diagnostic testing can identify a wide array of molecular markers, molecular imaging is unique in its capacity to noninvasively show the locations of molecular processes in patients, and it is expected to play a critical role in advancing precision medicine, particularly for cancers, which often demonstrate both intra- and intertumoral biological heterogeneity ( Hricak, 2011 ).

The growing body of medical knowledge, the variety of imaging options available, and the regular increases in the amounts and kinds of data that can be captured with imaging present tremendous challenges for radiologists, as no individual can be expected to achieve competency in all of the imaging modalities. General radiologists continue to be essential in certain clinical settings, but extended training and sub-specialization are often necessary for optimal, clinically relevant image interpretation, as is involvement in multidisciplinary disease management teams. Furthermore, the use of structured reporting templates tailored to specific examinations can help to increase the clarity, thoroughness, and clinical relevance of image interpretation ( Schwartz et al., 2011 ).

Like other forms of diagnostic testing, medical imaging has limitations. Some studies have found that between 20 and 50 percent of all advanced imaging results fail to provide information that improves patient outcome, although these studies do not account for the value of negative imaging results in influencing decisions about patient management ( Hendee et al., 2010 ). Imaging may fail to provide useful information because of modality sensitivity and specificity parameters; for example, the spatial resolution of an MRI may not be high enough to detect very small abnormalities. Inadequate patient education and preparation for an imaging test can also lead to suboptimal imaging quality that results in diagnostic error.

Perceptual or cognitive errors made by radiologists are a source of diagnostic error ( Berlin, 2014 ; Krupinski et al., 2012 ). In addition, incomplete or incorrect patient information, as well as insufficient sharing of patient information, may lead to the use of an inadequate imaging protocol, an incorrect interpretation of imaging results, or the selection of an inappropriate imaging test by a referring clinician. Referring clinicians often struggle with selecting the appropriate imaging test, in part because of the large number of available imaging options and gaps in the teaching of radiology in medical schools. Although consensus-based guidelines (e.g., the various “appropriateness criteria” published by the American College of Radiology [ACR]) are available to help select imaging tests for many conditions, these guidelines are often not followed. The use of clinical decision support systems at the point of care as well as direct consultations with radiologists have been proposed by the ACR as methods for improving imaging test selection ( Allen and Thorwarth, 2014 ).

There are several mechanisms for ensuring the quality of medical imaging. The Mammography Quality Standards Act (MQSA)—overseen by the Food and Drug Administration—was the first government-mandated accreditation program for any type of medical facility; it was focused on X-ray imaging for breast cancer. MQSA provides a general framework for ensuring national quality standards in facilities that perform screening mammography ( IOM, 2005 ). MQSA requires all personnel at facilities to meet initial qualifications, to demonstrate continued experience, and to complete continuing education. MQSA addresses protocol selection, image acquisition, interpretation and report generation, and the communication of results and recommendations. In addition, it provides facilities with data on diagnostic performance that can be used for benchmarking, self-monitoring, and improvement. MQSA has decreased the variability in mammography performed across the United States and improved the quality of care ( Allen and Thorwarth, 2014 ). However, the ACR noted that MQSA is complex and specified in great detail, which makes it inflexible, leading to administrative burdens and the need for extensive training of staff for implementation ( Allen and Thorwarth, 2014 ). It also focuses on only one medical imaging modality in one disease area; thus, it does not address newer screening technologies ( IOM, 2005 ). In addition, the Medicare Improvements for Patients and Providers Act (MIPPA) 3 requires that private outpatient facilities that perform CT, MRI, breast MRI, nuclear medicine, and PET exams be accredited. The requirements include personnel qualifications, image quality, equipment performance, safety standards, and quality assurance and quality control ( ACR, 2015a ). There are four CMS-designated accreditation organizations for medical imaging: ACR, the Intersocietal Accreditation Commission, The Joint Commission, and RadSite ( CMS, 2015a ). 
MIPPA also mandated that, beginning in 2017, ordering clinicians would be required to consult appropriateness criteria when ordering advanced medical imaging procedures, and the act called for a demonstration project evaluating clinician compliance with appropriateness criteria ( Timbie et al., 2014 ). In addition to these mandated activities, societies such as ACR and the Radiological Society of North America (RSNA) provide quality improvement programs and resources ( ACR, 2015b ; RSNA, 2015 ).

Referral and Consultation

Clinicians may refer to or consult with other clinicians (formally or informally) to seek additional expertise about a patient's health problem. The consult may help to confirm or reject the working diagnosis or may provide information on potential treatment options. If a patient's health problem is outside a clinician's area of expertise, he or she can refer the patient to a clinician who holds more suitable expertise. Clinicians can also recommend that the patient seek a second opinion from another clinician to verify their impressions of an uncertain diagnosis or if they believe that this would be helpful to the patient. Many groups raise awareness that patients can obtain a second opinion on their own ( AMA, 1996 ; CMS, 2015c ; PAF, 2012 ). Diagnostic consultations can also be arranged through the use of integrated practice units or diagnostic management teams ( Govern, 2013 ; Porter, 2010 ; see Chapter 4 ).

IMPORTANT CONSIDERATIONS IN THE DIAGNOSTIC PROCESS

The committee elaborated on several aspects of the diagnostic process, which are discussed below, including

  • diagnostic uncertainty
  • population trends
  • diverse populations and health disparities
  • mental health

Diagnostic Uncertainty

One of the complexities in the diagnostic process is the inherent uncertainty in diagnosis. As noted in the committee's conceptual model of the diagnostic process, an overarching question throughout the process is whether sufficient information has been collected to make a diagnosis. This does not mean that a diagnosis needs to be absolutely certain in order to initiate treatment. Kassirer concluded that:

Absolute certainty in diagnosis is unattainable, no matter how much information we gather, how many observations we make, or how many tests we perform. A diagnosis is a hypothesis about the nature of a patient's illness, one that is derived from observations by the use of inference. As the inferential process unfolds, our confidence as [clinicians] in a given diagnosis is enhanced by the gathering of data that either favor it or argue against competing hypotheses. Our task is not to attain certainty, but rather to reduce the level of diagnostic uncertainty enough to make optimal therapeutic decisions. ( Kassirer, 1989 , p. 1489)

Thus, the probability of disease does not have to be equal to one (diagnostic certainty) in order for treatment to be justified ( Pauker and Kassirer, 1980 ). The decision to begin treatment based on a working diagnosis is informed by: (1) the degree of certainty about the diagnosis; (2) the harms and benefits of treatment; and (3) the harms and benefits of further information-gathering activities, including the impact of delaying treatment.
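
These trade-offs can be combined in the classic threshold model attributed in the text to Pauker and Kassirer: treatment is justified once the probability of disease exceeds a threshold set by the harms and benefits of treatment. The sketch below is a minimal illustration with hypothetical utility values (the 1:9 harm-to-benefit ratio and the 0.25 working-diagnosis probability are assumptions for the example, not figures from the literature):

```python
def treatment_threshold(harm_if_well, benefit_if_sick):
    """Probability of disease above which treating beats withholding treatment,
    with harms and benefits expressed on a common utility scale."""
    return harm_if_well / (harm_if_well + benefit_if_sick)

# Hypothetical utilities: treating a patient who does not have the disease
# costs 1 unit of harm; treating a patient who does have it yields 9 units
# of benefit.
p_star = treatment_threshold(harm_if_well=1, benefit_if_sick=9)  # 0.10

p_working_diagnosis = 0.25  # assumed certainty in the working diagnosis
action = "treat" if p_working_diagnosis > p_star else "gather more information"
print(p_star, action)  # prints: 0.1 treat
```

The full threshold model also incorporates a separate, lower testing threshold when a further information-gathering test is available, so that the decision space becomes "do not treat," "test," or "treat" as the probability of disease rises.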

The risks associated with diagnostic testing are important considerations when conducting information-gathering activities in the diagnostic process. While underuse of diagnostic testing has been a long-standing concern, overly aggressive diagnostic strategies have recently been recognized for their risks ( Zhi et al., 2013 ) (see Chapter 3 ). Overuse of diagnostic testing has been partially attributed to clinicians' fear of missing something important and intolerance of diagnostic uncertainty: “I am far more concerned about doing too little than doing too much. It's the scan, the test, the operation that I should have done that sticks with me—sometimes for years. . . . By contrast, I can't remember anyone I sent for an unnecessary CT scan or operated on for questionable reasons a decade ago” ( Gawande, 2015 ). However, there is growing recognition that overly aggressive diagnostic pursuits are putting patients at greater risk for harm, and they are not improving diagnostic certainty ( Kassirer, 1989 ; Welch, 2015 ).

When considering diagnostic testing options, the harm from the procedure itself needs to be weighed against the potential information that could be gained. For some patients, the risk of invasive diagnostic testing may be inappropriate due to the risk of mortality or morbidity from the test itself (such as cardiac catheterization or invasive biopsies). In addition, the risk for harm needs to take into account the cascade of diagnostic testing and treatment decisions that could stem from a diagnostic test result. Included in these assessments are the potential for false positives and ambiguous or slightly abnormal test results that lead to further diagnostic testing or unnecessary treatment.

There are some cases in which treatment is initiated even though there is limited certainty in a working diagnosis. For example, an individual who has been exposed to a tick bite or HIV may be treated with prophylactic antibiotics or antivirals, because the risk of treatment may be felt to be smaller than the risk of harm from tick-borne diseases or HIV infection. Clinicians sometimes employ empiric treatment strategies—or the provision of treatment with a very uncertain diagnosis—and use a patient's response to treatment as an information-gathering activity to help arrive at a working diagnosis. However, it is important to note that response rates to treatment can be highly variable, and the failure to respond to treatment does not necessarily mean that a diagnosis is incorrect. Nor does improvement in the patient's condition necessarily validate that the treatment conferred this benefit and, therefore, that the empirically tested diagnosis was in fact correct. A treatment that is beneficial for some patients might not be beneficial for others with the same condition ( Kent and Hayward, 2007 ), hence the interest in precision medicine, which is hoped to better tailor therapy to maximize efficacy and minimize toxicity ( Jameson and Longo, 2015 ). In addition, there are isolated cases in which the morbidity and mortality risk of a diagnostic procedure and the likelihood of disease are both sufficiently high that significant therapy is given empirically. Moroff and Pauker (1983) described a decision analysis in which a 90-year-old practicing lawyer with a new 1.5 centimeter lung nodule was deemed to have a sufficiently high risk of mortality from lung biopsy and a sufficiently high likelihood of malignancy that the radiation oncologists felt comfortable treating the patient empirically for suspected lung cancer.

Of major importance in the diagnostic process is the element of time. Most diseases evolve over time, and there can be a delay between the onset of disease and the onset of a patient's symptoms; time can also elapse before a patient's symptoms are recognized as a specific diagnosis ( Zwaan and Singh, 2015 ). Some diagnoses can be determined in a very short time frame, while months may elapse before other diagnoses can be made. This is partially due to the variability and complexity of disease presentation. Similar symptoms may be related to a number of different diagnoses, and symptoms may evolve in different ways as a disease progresses; for example, a disease affecting multiple organs may initially involve symptoms or signs from a single organ. The thousands of different diseases and health conditions do not present in thousands of unique ways; there are only a finite number of symptoms with which a patient may present. At the outset, it can be very difficult to determine which particular diagnosis is indicated by a particular combination of symptoms, especially if symptoms are nonspecific, such as fatigue. Diseases may also present atypically, with an unusual and unexpected constellation of symptoms ( Emmett, 1998 ).

Adding to the complexity of the time-dependent nature of the diagnostic process are the numerous settings of care in which diagnosis occurs and the potential involvement of multiple settings of care within a single diagnostic process. Henriksen and Brady noted that this process—for patients, their families, and clinicians alike—can often feel like “a disjointed journey across confusing terrain, aided or impeded by different agents, with no destination in sight and few landmarks along the way” ( Henriksen and Brady, 2013 , p. ii2).

Some diagnoses may be more important to establish immediately than others. These include diagnoses that can lead to significant patient harm if not recognized, diagnosed, and treated early, such as anthrax, aortic dissection, and pulmonary embolism. Sometimes making a timely diagnosis relies on the fast recognition of symptoms outside of the health care setting (e.g., public awareness of stroke symptoms can help improve the speed of receiving medical help and increase the chances of a better recovery) ( National Stroke Association, 2015 ). In these cases, the benefit of treating the disease promptly can greatly exceed the potential harm from unnecessary treatment. Consequently, the threshold for ordering diagnostic testing or for initiating treatment becomes quite low for such health problems ( Pauker and Kassirer, 1975 , 1980 ). In other cases, the potential harm from rapidly and unnecessarily treating a diagnosed condition can lead to a more conservative (or higher-threshold) approach in the diagnostic process.

Population Trends

Population trends, such as the aging of the population, are adding significant complexity to the diagnostic process and require clinicians to consider such complicating factors in diagnosis as comorbidity, polypharmacy and attendant medication side effects, as well as disease and medication interactions ( IOM, 2008 , 2013b ). Diagnosis can be especially challenging in older patients because classic presentations of disease are less common in older adults ( Jarrett et al., 1995 ). For example, infections such as pneumonia or urinary tract infections often do not present in older patients with fever, cough, and pain but rather with symptoms such as lethargy, incontinence, loss of appetite, or disruption of cognitive function ( Mouton et al., 2001 ). Acute myocardial infarction (MI) may present with fatigue and confusion rather than with typical symptoms such as chest pain or radiating arm pain ( Bayer et al., 1986 ; Qureshi et al., 2000 ; Rich, 2006 ). Sensory limitations in older adults, such as hearing and vision impairments, can also contribute to challenges in making diagnoses ( Campbell et al., 1999 ). Physical illnesses often present with a change in cognitive status in older individuals without dementia ( Mouton et al., 2001 ). In older adults with mild to moderate dementia, such illnesses can manifest with worsening cognition. Older patients who have multiple comorbidities, medications, or cognitive and functional impairments are more likely to have atypical disease presentations, which may increase the risk of experiencing diagnostic errors ( Gray-Miceli, 2008 ).

Diverse Populations and Health Disparities

Communicating with diverse populations can also contribute to the complexity of the diagnostic process. Language, health literacy, and cultural barriers can affect clinician–patient encounters and increase the potential for challenges in the diagnostic process ( Flores, 2006 ; IOM, 2003 ; The Joint Commission, 2007 ). There are indications that biases influence diagnosis; one well-known example is the differential referral of patients for cardiac catheterization by race and gender ( Schulman et al., 1999 ). In addition, women are more likely than men to experience a missed diagnosis of heart attack, a situation that has been partly attributed to real and perceived gender biases, but which may also be the result of physiologic differences, as women have a higher likelihood of presenting with atypical symptoms, including abdominal pain, shortness of breath, and congestive heart failure ( Pope et al., 2000 ).

Mental Health

Mental health diagnoses can be particularly challenging. Mental health diagnoses rely on the Diagnostic and Statistical Manual of Mental Disorders (DSM); each diagnosis in the DSM includes a set of diagnostic criteria that indicate the type and length of symptoms that need to be present, as well as the symptoms, disorders, and conditions that cannot be present, in order to be considered for a particular diagnosis ( APA, 2015 ). Compared to physical diagnoses, many mental health diagnoses rely on patient reports and observation; there are few biological tests that are used in such diagnoses ( Pincus, 2014 ). A key challenge can be distinguishing physical diagnoses from mental health diagnoses; sometimes physical conditions manifest as psychiatric ones, and vice versa ( Croskerry, 2003a ; Hope et al., 2014 ; Pincus, 2014 ; Reeves et al., 2010 ). In addition, there are concerns about missing psychiatric diagnoses, as well as overtreatment concerns ( Bor, 2015 ; Meyer and Meyer, 2009 ; Pincus, 2014 ). For example, clinician biases toward older adults can contribute to missed diagnoses of depression, because it may be perceived that older adults are likely to be depressed, lethargic, or have little interest in interactions. Patients with mental health–related symptoms may also be more vulnerable to diagnostic errors, a situation that is attributed partly to clinician biases; for example, clinicians may disregard symptoms in patients with previous diagnoses of mental illness or substance abuse and attribute new physical symptoms to a psychological cause ( Croskerry, 2003a ). Individuals with health problems that are difficult to diagnose or those who have chronic pain may also be more likely to receive psychiatric diagnoses erroneously.

CLINICAL REASONING AND DIAGNOSIS

Accurate, timely, and patient-centered diagnosis relies on proficiency in clinical reasoning, which is often regarded as the clinician's quintessential competency. Clinical reasoning is “the cognitive process that is necessary to evaluate and manage a patient's medical problems” ( Barrows, 1980 , p. 19). Understanding the clinical reasoning process and the factors that can affect it is important to improving diagnosis, given that clinical reasoning processes contribute to diagnostic errors ( Croskerry, 2003a ; Graber, 2005 ). Health care professionals involved in the diagnostic process have an obligation and ethical responsibility to employ clinical reasoning skills: “As an expanding body of scholarship further elucidates the causes of medical error, including the considerable extent to which medical errors, particularly in diagnostics, may be attributable to cognitive sources, insufficient progress in systematically evaluating and implementing suggested strategies for improving critical thinking skills and medical judgment is of mounting concern” ( Stark and Fins, 2014 , p. 386). Clinical reasoning occurs within clinicians' minds (facilitated or impeded by the work system) and involves judgment under uncertainty, with a consideration of possible diagnoses that might explain symptoms and signs, the harms and benefits of diagnostic testing and treatment for each of those diagnoses, and patient preferences and values.

The current understanding of clinical reasoning is based on the dual process theory, a widely accepted paradigm of decision making. The dual process theory integrates analytical and non-analytical models of decision making (see Box 2-4 ). Analytical models (slow system 2) involve a conscious, deliberate process guided by critical thinking ( Kahneman, 2011 ). Nonanalytical models (fast system 1) involve unconscious, intuitive, and automatic pattern recognition ( Kahneman, 2011 ).

BOX 2-4. Models of Clinical Reasoning.

Fast system 1 (nonanalytical, intuitive) automatic processes require very little working memory capacity. They are often triggered by stimuli or result from overlearned associations or implicitly learned activities. 4 Examples of system 1 processes include the ability to recognize human faces ( Kanwisher and Yovel, 2006 ), the diagnosis of Lyme disease from a bull's-eye rash, or decisions based on heuristics (mental shortcuts), intuition, or repeated experiences.

In contrast, slow system 2 (reflective, analytical) processing places a heavy load on working memory and involves hypothetical and counterfactual reasoning ( Evans and Stanovich, 2013 ; Stanovich and Toplak, 2012 ). System 2 processing requires individuals to generate mental models of what should or should not happen in particular situations, in order to test possible actions or to explore alternative causes of events ( Stanovich, 2009 ). Hypothetical thinking occurs when one reasons about what should occur if some condition held: For example, if this patient has diabetes, then the blood sugar level should exceed 126 mg/dl after an 8-hour fast, or if prescribed a diabetes medication, the sugar level should improve. Counterfactual reasoning occurs when one thinks about what should occur if the situation differed from how it actually is. The deliberate, conscious, and reflective nature of both hypothetical and counterfactual reasoning illustrates the analytical nature of system 2.
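
The passage's diabetes example can be read as a small rule-checking exercise: a hypothesis implies testable predictions, and observations that contradict them weaken the hypothesis. The sketch below is an illustration of that idea only; the threshold rule comes from the text, but the function names, data structure, and observed value are invented for the example:

```python
def predictions_for(hypothesis):
    """Return (description, check) pairs that the hypothesis implies.
    The rule table is an assumption for illustration, mirroring the text:
    if the patient has diabetes, fasting glucose should exceed 126 mg/dl."""
    rules = {
        "diabetes": [
            ("fasting glucose > 126 mg/dl after an 8-hour fast",
             lambda obs: obs["fasting_glucose_mg_dl"] > 126),
        ],
    }
    return rules[hypothesis]

# Hypothetical observation for a patient under evaluation:
observed = {"fasting_glucose_mg_dl": 104}

for description, check in predictions_for("diabetes"):
    if check(observed):
        print(f"Prediction held ({description}); hypothesis supported")
    else:
        print(f"Prediction failed ({description}); hypothesis weakened")
```

Making each prediction explicit in this way is what distinguishes deliberate system 2 reasoning from the immediate pattern match of system 1.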

Heuristics—mental shortcuts or cognitive strategies that are automatically and unconsciously employed—are particularly important for decision making ( Gigerenzer and Goldstein, 1996 ). Heuristics can facilitate decision making but can also lead to errors, especially when patients present with atypical symptoms ( Cosmides and Tooby, 1996 ; Gigerenzer, 2000 ; Kahneman, 2011 ; Klein, 1998 ; Lipshitz et al., 2001 ; McDonald, 1996 ). When a heuristic fails, it is referred to as a cognitive bias. Cognitive biases, or predispositions to think in a way that leads to failures in judgment, can also be caused by affect and motivation ( Kahneman, 2011 ). Prolonged learning in a regular and predictable environment increases the success of heuristics, whereas uncertain and unpredictable environments are a chief cause of heuristic failure ( Kahneman, 2011 ; Kahneman and Klein, 2009 ). There are many heuristics and biases that affect clinical reasoning and decision making (see Table 2-2 for medical and nonmedical examples). Additional examples of heuristics and biases that affect decision making and the potential for diagnostic errors are described below ( Croskerry, 2003b ):

TABLE 2-2. Examples of Heuristics and Biases That Influence Decision Making.

  • The representativeness heuristic answers the question, “how likely is it that this patient has a particular disease?” by assessing how typical the patient's symptoms are for that disease. If the symptoms are highly typical (e.g., fever and nausea after contact with an individual from West Africa with Ebola virus), then it is likely the patient will be diagnosed as having that condition (e.g., Ebola virus infection). The representativeness bias refers to the tendency to make decisions based on a typical case, even when this may lead to an incorrect judgment. The representativeness bias helps to explain why an incorrect diagnosis (e.g., a patient diagnosed as not having Ebola virus infection) is made when presenting symptoms are atypical (e.g., no fever or nausea after contact with a person from West Africa).
  • Base-rate neglect describes the tendency to ignore the prevalence of a disease in determining a diagnosis. For example, a clinician may think the diagnosis is acid reflux because it is a prevalent condition, even though it is actually an MI, which can present with similar symptoms (e.g., chest pain), but is less likely.
  • The overconfidence bias reflects the universal tendency to believe that we know more than we do. This bias encourages individuals to diagnose a disease based on incomplete information; too much faith is placed in one's opinion rather than on carefully gathering evidence. This bias is especially likely to develop if clinicians do not have feedback on their diagnostic performance.
  • Psych-out errors describe the increased susceptibility of people with mental illnesses to clinician biases and heuristics due to their mental health conditions. Patients with mental health issues may have new physical symptoms that are not considered seriously because their clinicians attribute them to their mental health issues. Patients with physical symptoms that mimic mental illnesses (hypoxia, delirium, metabolic abnormalities, central nervous system infections, and head injuries) may also be susceptible to these errors.
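
Base rates and symptom fit combine multiplicatively under Bayes' rule, so neglecting either factor can distort a differential. The sketch below uses made-up prevalence and likelihood numbers (assumptions for illustration only, not clinical data) to show how a common condition can remain the more probable explanation even when a rarer disease matches the presenting symptom more "typically":

```python
def normalized_posteriors(priors, likelihoods):
    """Posterior probability of each candidate diagnosis, restricted to the
    candidates listed: P(dx | finding) is proportional to P(dx) * P(finding | dx)."""
    scores = {dx: priors[dx] * likelihoods[dx] for dx in priors}
    total = sum(scores.values())
    return {dx: s / total for dx, s in scores.items()}

# Made-up numbers for a patient presenting with chest pain:
priors = {"acid reflux": 0.20, "myocardial infarction": 0.02}       # base rates
likelihoods = {"acid reflux": 0.30, "myocardial infarction": 0.80}  # P(chest pain | dx)

post = normalized_posteriors(priors, likelihoods)
# Judging by symptom fit alone (0.80 vs. 0.30) favors MI; weighting by the
# base rate reverses the ranking: reflux ~0.79, MI ~0.21.
print(post)
```

Note that a high posterior for the common, benign diagnosis does not settle the matter: because missing an MI is far more harmful than missing reflux, the treatment-threshold considerations discussed earlier may still justify testing to rule out the rarer, dangerous condition.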

In addition to cognitive biases, research suggests that fallacies in reasoning, ethical violations, and financial and nonfinancial conflicts of interest can influence medical decision making ( Seshia et al., 2014a , b ). These factors, collectively referred to as “cognitive biases plus,” have been identified as potentially undermining the evidence that informs clinical decision making ( Seshia et al., 2014a , b ).

The interaction between fast system 1 and slow system 2 remains controversial. Some hold that these processes are constantly occurring in parallel and that any conflicts are resolved as they arise. Others have argued that system 1 processes generate an individual's default response and that system 2 processes may or may not intervene and override system 1 processing ( Evans and Stanovich, 2013 ; Kahneman, 2011 ). When system 2 overrides system 1, this can lead to improved decision making, because engaging in analytical reasoning may correct for inaccuracies. It is important to note that slow system 2 processing does not guarantee correct decision making. For instance, clinicians with an inadequate knowledge base may not have the information necessary to make a correct decision. There are some instances when system 1 processing is correct, and the override from system 2 can contribute to incorrect decision making. However, when system 1 overrides system 2 processing, this can also result in irrational decision making.

Intervention by system 2 is likely to occur in novel situations when the task at hand is difficult; when an individual has minimal knowledge or experience ( Evans and Stanovich, 2013 ; Kahneman, 2011 ); or when an individual deliberately employs strategies to overcome known biases ( Croskerry et al., 2013 ). Monitoring and intervention by system 2 on system 1 is unlikely to catch every failure because it is inefficient and would require sustained vigilance, given that system 1 processing often leads to correct solutions ( Kahneman, 2011 ). Factors that affect working memory can impede the ability of system 2 to monitor and, when necessary, intervene on system 1 processes ( Croskerry, 2009b ). For example, if clinicians are tired or distracted by elements in the work system, they may fail to recognize when a decision provided by system 1 processing needs to be reconsidered ( Croskerry, 2009b ).

System 1 and system 2 perform optimally in different types of clinical practice settings. System 1 performs best in highly reliable and predictable environments but falls short in uncertain and irregular settings ( Kahneman and Klein, 2009 ; Stanovich, 2009 ). System 2 performs best in relaxed and unhurried environments.

Dual Process Theory and Diagnosis

This section applies the dual process theory of clinical reasoning to the diagnostic process ( Croskerry, 2009a , b ; Norman and Eva, 2010 ; Pelaccia et al., 2011 ). Croskerry and colleagues provide a framework for understanding the cognitive activities that occur in clinicians as they iterate through information gathering, information integration and interpretation, and determining a working diagnosis ( Croskerry et al., 2013 ) (see Figure 2-2 ).

The dual process model of diagnostic decision making. When a patient presents to a clinician, the initial data include symptoms and signs of disease, which can range from single characteristics of disease to illness scripts.

When patients present, clinicians gather information and compare that information with their knowledge about various diseases. This can include comparing a patient's signs and symptoms with clinicians' mental models of diseases (or information about diseases that is stored in memory as exemplars, prototypes, or illness scripts; see Box 2-4 ). This initial pattern matching is an instance of fast system 1 processing. If a sufficiently unique match occurs, then a diagnosis may be made without involvement of slow system 2.

However, some symptoms or signs may not be recognized, or they may trigger mental models for several diseases at once. When this happens, slow system 2 processing may be engaged, and the clinician will continue to gather, integrate, and interpret potentially relevant information until a working diagnosis is generated and communicated to the patient. When this process triggers pattern matches for several mental models of disease, a differential diagnosis is developed. At this point, the diagnostic process shifts to slow system 2 analytical reasoning. Clinicians then use deductive reasoning: If this patient has disease A, what clinical history and physical examination findings might be expected, and does the patient have them? This process is repeated for each condition in the differential diagnosis and may be augmented by additional sources of information, such as diagnostic testing, further history gathering or physical examination, or referral or consultation. The cognitive process of reassessing the probability assigned to each potential diagnosis involves inductive reasoning, or going from observed signs and symptoms to the likelihood of each disease to determine which hypothesis is most likely ( Goodman, 1999 ). This can help refine and narrow the differential diagnosis. Further information gathering or treatment could provide greater certainty regarding a working diagnosis or suggest that alternative diagnoses be considered. Throughout this process, clinicians need to communicate with patients about the working diagnosis and the degree of certainty involved.

Task complexity and expertise affect which cognitive system is dominant in the diagnostic process. System 1 processing is more likely to be used when patients present with typical signs and symptoms of disease. However, system 2 processing is likely to intervene in situations marked by novelty and difficulty, when patients present with atypical signs and symptoms, or when clinicians lack expertise ( Croskerry, 2009b ; Evans and Stanovich, 2013 ). Novice clinicians and medical students are more likely than experienced clinicians to rely on analytical reasoning throughout the diagnostic process ( Croskerry, 2009b ; Elstein and Schwartz, 2002 ; Kassirer, 2010 ; Norman, 2005 ). Expert clinicians possess better developed mental models of diseases, which support more reliable pattern matching (system 1 processes) ( Croskerry, 2009b ). As a clinician accumulates experience, the repetition of system 2 processing can expand pattern matching possibilities by building and storing in memory mental models for additional diseases that can be triggered by patient signs and symptoms. This ability to create and refine mental models through repetition explains why expert clinicians are more likely than novices to rely on pattern recognition when making diagnoses: continuous engagement with disease conditions allows the expert to develop more reliable mental models of disease by retaining more exemplars, creating more nuanced prototypes, or developing more detailed illness scripts.

The way in which information is processed through system 1 and system 2 informs a clinician's subsequent diagnostic performance. Figure 2-3 illustrates the concept of calibration, or the process of a clinician becoming aware of his or her diagnostic abilities and limitations through feedback. Feedback mechanisms—both in educational settings (see Chapter 4 ) and in learning health care systems (see Chapter 6 )—allow clinicians to compare their patients' ultimate diagnoses with the diagnoses that they provided to those patients. Calibration enables clinicians to assess their diagnostic accuracy and improve their future performance.

Calibration in the diagnostic process. Favorable or unfavorable information about a clinician's diagnostic performance provides feedback and improves clinician calibration.

Work system factors influence diagnostic reasoning, including diagnostic team members and tasks, technologies and tools, organizational characteristics, the physical environment, and the external environment. For example, Chapter 6 describes how the physical environment, including lighting, noise, and layout, can influence clinical reasoning. Chapter 5 discusses how health IT can improve or degrade clinical reasoning, depending on the usability of health IT (including clinical decision support), its integration into clinical workflow, and other factors. Box 2-5 describes how certain individual characteristics of diagnostic team members can affect clinical reasoning.

Individual Characteristics That Influence Clinical Reasoning.

Probabilistic (Bayesian) Reasoning

As described above, the diagnostic process involves initial information gathering that leads to a working diagnosis. The process of ruling in or ruling out a diagnosis involves probabilistic reasoning as findings are integrated and interpreted. Probabilistic (or Bayesian) reasoning provides a formal method to avoid some cognitive biases when integrating and interpreting information. For instance, when patients present with typical symptoms but the disease is rare (e.g., the classic triad of headache, sweating, and rapid heart rate for pheochromocytoma), base rate neglect and the representativeness bias may lead clinicians to overestimate the likelihood of pheochromocytoma among patients presenting with high blood pressure. Using Bayesian reasoning and formally revising probabilities of the various diseases under consideration helps clinicians avoid these errors. Clinicians can then decide whether to pursue additional information gathering or treatment based on an accurate estimate of the likelihood of disease, the harms and benefits of treatment, and patient preferences ( Kassirer et al., 2010 ; Pauker and Kassirer, 1980 ).
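A quick Bayesian calculation makes the point concrete. The numbers below are illustrative assumptions, not figures from the report: even a finding that is both sensitive and fairly specific leaves the posterior probability low when the disease is rare.

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """P(disease | positive finding), computed with Bayes' theorem."""
    true_pos = sensitivity * prior               # P(finding and disease)
    false_pos = (1 - specificity) * (1 - prior)  # P(finding and no disease)
    return true_pos / (true_pos + false_pos)

# Assumed values for illustration only: the "classic triad" is present in
# 90% of patients with the rare disease (sensitivity 0.90), absent in 95%
# of patients without it (specificity 0.95), and the disease has a 0.2%
# prevalence in this population (prior 0.002).
p = posterior_given_positive(prior=0.002, sensitivity=0.90, specificity=0.95)
print(f"Posterior probability of disease: {p:.1%}")  # roughly 3.5%
```

Despite the "typical" clinical picture, the low base rate keeps the posterior probability under 4 percent, which is the error that base rate neglect invites clinicians to make.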

Probabilistic reasoning is most often considered in the context of diagnostic testing, but the presence or absence of specific signs and symptoms can also help to rule in or rule out diseases. The likelihood of a positive finding (the presence of signs or symptoms, or a positive test) when disease is present is referred to as sensitivity. The likelihood of a negative finding (the absence of signs or symptoms, or a negative test) when disease is absent is referred to as specificity. If a sign, symptom, or test is always positive in the presence of a particular disease (100 percent sensitivity), then the absence of that sign, symptom, or test rules out disease (e.g., absence of pain or stiffness means the patient does not have polymyalgia rheumatica). If a sign, symptom, or test is always negative in the absence of a particular disease (100 percent specificity), then the presence of that sign, symptom, or test rules in disease (e.g., all patients with Kayser–Fleischer rings have Wilson's disease; all patients with Koplik's spots have measles).
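The two limiting cases fall directly out of Bayes' theorem, as this sketch shows (the prior and test characteristics are arbitrary assumed values):

```python
def posterior(prior, sensitivity, specificity, finding_positive):
    """P(disease | finding), via Bayes' theorem for a binary finding."""
    if finding_positive:
        p_given_disease = sensitivity          # true positive rate
        p_given_healthy = 1 - specificity      # false positive rate
    else:
        p_given_disease = 1 - sensitivity      # false negative rate
        p_given_healthy = specificity          # true negative rate
    joint_disease = p_given_disease * prior
    joint_healthy = p_given_healthy * (1 - prior)
    return joint_disease / (joint_disease + joint_healthy)

# 100% sensitivity: a negative finding rules OUT disease (posterior = 0).
assert posterior(0.3, sensitivity=1.0, specificity=0.8, finding_positive=False) == 0.0

# 100% specificity: a positive finding rules IN disease (posterior = 1).
assert posterior(0.3, sensitivity=0.7, specificity=1.0, finding_positive=True) == 1.0
```

With any sensitivity or specificity below 100 percent, the same function returns an intermediate probability, which is why most findings shift the differential rather than settle it.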

However, nearly all signs, symptoms, and test results are neither 100 percent sensitive nor 100 percent specific. For example, studies report Kayser–Fleischer-like rings with other causes of liver disease ( Frommer et al., 1977 ; Lipman and Deutsch, 1990 ), Koplik's spots with parvovirus B19 or echovirus ( Suringa et al., 1970 ), and even Reed-Sternberg cells in conditions other than Hodgkin's lymphoma ( Azar, 1975 ).

Bayes' theorem provides a framework for clinicians to revise the probability of disease, given disease prevalence, as well as the presence or absence of clinical findings or positive or negative test results ( Grimes and Schulz, 2005 ; Griner et al., 1981 ; Kassirer et al., 2010 ; Pauker and Kassirer, 1980 ). Bayesian calculators are available to facilitate these probability revision analyses ( Simel and Rennie, 2008 ). Box 2-6 works through two examples of probabilistic reasoning. While most clinicians will not formally calculate probabilities, the logical principles behind Bayesian reasoning can help clinicians consider the trade-offs involved in further information gathering, decisions about treatment, or evaluating clinically ambiguous cases ( Kassirer et al., 2010 ). The committee's recommendation on improving diagnostic competencies includes a focus on diagnostic test ordering and subsequent decision making, which relies on the principles of probabilistic reasoning.
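In practice, probability revision is often carried out in the odds form used by likelihood-ratio tables ( Grimes and Schulz, 2005 ). A minimal sketch of such a calculation follows; the pre-test probability and likelihood ratio are assumed values chosen for illustration, not figures from Box 2-6:

```python
def revise_probability(pretest_probability, likelihood_ratio):
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pretest_odds = pretest_probability / (1 - pretest_probability)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Assumed values: a finding with a positive likelihood ratio of 6.0,
# applied to a patient whose pre-test probability of disease is 20%.
p = revise_probability(pretest_probability=0.20, likelihood_ratio=6.0)
print(f"Post-test probability: {p:.0%}")  # 60%
```

The odds form makes the trade-off explicit: the same finding (likelihood ratio of 6.0) raises a 20 percent pre-test probability to 60 percent, but would raise a 1 percent pre-test probability only to about 6 percent, underscoring why the starting prevalence matters.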

Examples of Probabilistic (Bayesian) Reasoning.

THE DIAGNOSTIC EVIDENCE BASE AND CLINICAL PRACTICE

Advances in biology and medicine have led to improvements in prevention, diagnosis, and treatment, with a deluge of innovations in diagnostic testing ( IOM, 2000 , 2013a ; Korf and Rehm, 2013 ; Lee and Levy, 2012 ). The rising complexity and volume of these advances, coupled with clinician time constraints and cognitive limitations, have outstripped human capacity to apply this new knowledge ( IOM, 2011a , 2013a ; Marois and Ivanoff, 2005 ; Miller, 1956 ; Ostbye et al., 2005 ; Tombu et al., 2011 ; Yarnall et al., 2003 ). The Institute of Medicine report Best Care at Lower Cost: The Path to Continuously Learning Health Care in America concluded that “diagnostic and treatment options are expanding and changing at an accelerating rate, placing new stresses on clinicians and patients, as well as potentially impacting the effectiveness and efficiency of care delivery” ( IOM, 2013a , p. 10). The sheer number of potential diagnoses illustrates this complexity: There are thousands of diseases and related health conditions categorized in the National Library of Medicine's Medical Subject Headings (MeSH) system and around 13,000 in the International Classification of Diseases , 9th Revision , with new conditions and diseases added every year ( Medicaid.gov, 2015 ).

With the rapidly increasing number of published scientific articles on health (see Figure 2-4 ), health care professionals have difficulty keeping up with the breadth and depth of knowledge in their specialties. For example, to remain up to date, primary care clinicians would need to read for an estimated 627.5 hours per month ( Alper et al., 2004 ). McGlynn and colleagues (2003) found that Americans receive only about half of recommended care, including recommended diagnostic processes. Thus, clinicians need approaches to ensure they know the evidence base and are well-equipped to deliver care that reflects the most up-to-date information. One of the ways that this is accomplished is through team-based care; by moving from individuals to teams of health care professionals, patients can benefit from a broader set of resources and expertise to support care ( Gittell et al., 2010 ) (see Chapter 4 ). In addition, systematic reviews and clinical practice guidelines (CPGs) help synthesize available information in order to inform clinical practice decision making ( IOM, 2011a , b ).

Number of journal articles published on health care topics per year from 1970 to 2010. Publications have increased steadily over 40 years. SOURCE: IOM, 2013a.

CPGs came into prominence partly in response to studies that found excessive variation in diagnostic and treatment-related care practices, indicating that inappropriate care was occurring ( Chassin et al., 1987 ; IOM, 1990 ; Kosecoff et al., 1987 ; Lin et al., 2008 ; Song et al., 2010 ). CPGs are defined as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of the evidence and an assessment of the benefits and harms of alternative care options” ( IOM, 2011a , p. 4). CPGs can include diagnostic criteria for specific conditions as well as approaches to information gathering, such as conducting a clinical history and interview, the physical exam, diagnostic testing, and consultations.

CPGs translate knowledge into clinical care decisions, and adherence to evidence-based guideline recommendations can improve health care quality and patient outcomes ( Bhatt et al., 2004 ; IOM, 2011a ; Peterson et al., 2006 ). However, there have been a number of challenges to the development and use of CPGs in clinical practice ( IOM, 2011a , 2013a , b ; Kahn et al., 2014 ; Timmermans and Mauck, 2005 ). Two of the primary challenges are the inadequacy of the evidence base supporting CPGs and determining the applicability of guidelines for individual patients ( IOM, 2011a , 2013b ). For example, individual patient preferences for possible health outcomes may vary, and with the growing prevalence of chronic disease, patients often have comorbidities or competing causes of mortality that need to be considered. CPGs may not factor in these patient-specific variables ( Boyd et al., 2005 ; Mulley et al., 2012 ; Tinetti et al., 2004 ). In addition, the majority of scientific evidence about any diagnostic test typically is focused on test accuracy and not on the impact of the test on patient outcomes ( Brozek et al., 2009 ; Trikalinos et al., 2009 ). This makes it difficult to develop guidelines that inform clinicians about the role of diagnostic tests within the diagnostic process and about how these tests can influence the path of care and health outcomes for a patient ( Gopalakrishna et al., 2014 ; Hsu et al., 2011 ). Furthermore, diagnosis is generally not a primary focus of CPGs; diagnostic testing guidelines typically account for a minority of recommendations and often have lower levels of evidence supporting them than treatment-related CPGs ( Tricoci et al., 2009 ). 
The adoption of available clinical practice guideline recommendations into practice remains suboptimal due to concerns about the trustworthiness of the guidelines as well as the existence of varying and conflicting guidelines ( Ferket et al., 2011 ; Han et al., 2011 ; IOM, 2011a ; Lenzer et al., 2013 ; Pronovost, 2013 ).

Health care professional societies have also begun to develop appropriate use or appropriateness criteria as a way of synthesizing the available scientific literature and expert opinion to inform patient-specific decision making ( Fitch et al., 2001 ). With the growth of diagnostic testing and substantial geographic variation in the utilization of these tools (due in part to the limitations in the evidence base supporting their use), health care professional societies have developed appropriate use criteria aimed at better matching patients to specific health care interventions ( Allen and Thorwarth, 2014 ; Patel et al., 2005 ).

Checklists are another approach that has been implemented to improve the safety of care by, for example, preventing health care–acquired infections or errors in surgical care. Checklists have also been proposed to improve the diagnostic process ( Ely et al., 2011 ; Schiff and Leape, 2012 ; Sibbald et al., 2013 ). Developing checklists for the diagnostic process may be a significant undertaking; thus far, checklists have been developed for discrete, observable tasks, but the complexity of the diagnostic process, including the associated cognitive tasks, may represent a fundamentally different type of challenge ( Henriksen and Brady, 2013 ).

  • AAFP (American Academy of Family Physicians). About the AAFP proficiency testing program. 2015. [May 15, 2015]. www.aafp.org/practice-management/labs/about.html.
  • ACMG (American College of Medical Genetics and Genomics) Board of Directors. Points to consider in the clinical application in genomic sequencing. Genetics in Medicine. 2012; 14 (8):759–761. [ PubMed : 22863877 ]
  • ACR (American College of Radiology). Accreditation. 2015a. [May 22, 2015]. www.acr.org/quality-safety/accreditation.
  • ACR. Quality & safety. 2015b. [May 22, 2015]. www.acr.org/Quality-Safety.
  • Allen B, Thorwarth WT. Comments from the American College of Radiology. Washington, DC: 2014. (Input submitted to the Committee on Diagnostic Error in Health Care, November 5 and December 29, 2014).
  • Alper B, Hand JA, Elliott SG, Kinkade S, Hauan MJ, Onion DK, Sklar BM. How much effort is needed to keep up with the literature relevant for primary care? Journal of the Medical Library Association. 2004; 92 (4):429–437. [ PMC free article : PMC521514 ] [ PubMed : 15494758 ]
  • AMA (American Medical Association). AMA code of ethics. 1996. [March 22, 2015]. www.ama-assn.org/ama/pub/physician-resources/medical-ethics/code-medical-ethics/opinion8041.page.
  • APA (American Psychiatric Association). DSM. 2015. [May 13, 2015]. www.psychiatry.org/practice/dsm.
  • ASCP (American Society for Clinical Pathology). Patient access to test results. 2014. [March 16, 2015]. www.ascp.org/Advocacy/Patient-Access-to-Test-Results.html.
  • AdvaMedDx. Introduction to molecular diagnostics: The essentials of diagnostics series. 2013. [May 22, 2015]. http://advameddx.org/download/files/AdvaMedDx_DxInsights_FINAL(2).pdf.
  • Azar HA. Significance of the Reed-Sternberg cell. Human Pathology. 1975; 6 (4):479–484. [ PubMed : 1150223 ]
  • Barrows HS. Problem-based learning: An approach to medical education. New York: Springer; 1980.
  • Barrows HS, Norman GR, Neufeld VR, Feightner JW. The clinical reasoning of randomly selected physicians in general medical practice. Clinical & Investigative Medicine. 1982; 5 (1):49–55. [ PubMed : 7116714 ]
  • Bayer AJ, Chadha JS, Farag RR, Pathy MS. Changing presentation of myocardial infarction with increasing old age. Journal of the American Geriatrics Society. 1986; 34 (4):263–266. [ PubMed : 3950299 ]
  • Berger D. A brief history of medical diagnosis and the birth of the clinical laboratory. Part 4—Fraud and abuse, managed-care, and lab consolidation. Medical Laboratory Observer. 1999; 31 (12):38–42. [ PubMed : 11184281 ]
  • Berlin L. Radiologic errors, past, present and future. Diagnosis. 2014; 1 (1):79–84. [ PubMed : 29539959 ]
  • Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. The American Journal of Medicine. 2008; 121 (5):S2–S23. [ PubMed : 18440350 ]
  • Bhatt DL, Roe MT, Peterson ED, Li Y, Chen AY, Harrington RA, Greenbaum AB, Berger PB, Cannon CP, Cohen DJ, Gibson CM, Saucedo JF, Kleiman NS, Hochman JS, Boden WE, Brindis RG, Peacock WF, Smith SC Jr., Pollack CV Jr., Gibler WB, Ohman EM. CRUSADE Investigators. Utilization of early invasive management strategies for high-risk patients with non-ST-segment elevation acute coronary syndromes: Results from the CRUSADE Quality Improvement Initiative. JAMA. 2004; 292 (17):2096–2104. [ PubMed : 15523070 ]
  • Blanchette I, Richards A. The influence of affect on higher level cognition: A review of research on interpretation, judgement, decision making and reasoning. Cognition and Emotion. 2009; 24 (4):561–595.
  • Bluth EI, Truong H, Bansal S. The 2014 ACR Commission on Human Resources Workforce Survey. Journal of the American College of Radiology. 2014; 11 (10):948–952. [ PubMed : 25131824 ]
  • Bor JS. Among the elderly, many mental illnesses go undiagnosed. Health Affairs (Millwood). 2015; 34 (5):727–731. [ PubMed : 25941272 ]
  • Bordage G, Zacks R. The structure of medical knowledge in the memories of medical students and general practitioners: categories and prototypes. Medical Education. 1984; 18 (6):406–416. [ PubMed : 6503748 ]
  • Boshuizen HPA, Schmidt HG. Clinical reasoning in the health professions. Higgs J, Jones M, Loftus S, Christensen N, editors. Oxford: Butterworth Heinemann/Elsevier; 2008. pp. 113–121. (The development of clinical reasoning expertise; Implications for teaching).
  • Boyd CM, Darer J, Boult C, Fried LP, Boult L, Wu AW. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: Implications for pay for performance. JAMA. 2005; 294 (6):716–724. [ PubMed : 16091574 ]
  • Brozek JL, Akl EA, Jaeschke R, Lang DM, Bossuyt P, Glasziou P, Helfand M, Ueffing E, Alonso-Coello P, Meerpohl J, Phillips B, Horvath AR, Bousquet J, Guyatt GH, Schunemann HJ, Group GW. Grading quality of evidence and strength of recommendations in clinical practice guidelines: Part 2 of 3. The GRADE approach to grading quality of evidence about diagnostic tests and strategies. Allergy. 2009; 64 (8):1109–1116. [ PubMed : 19489757 ]
  • Byrnes JP, Miller DC, Schafer WD. Gender differences in risk taking: A meta-analysis. Psychological Bulletin. 1999; 125 (3):367.
  • Campbell VA, Crews JE, Moriarty DG, Zack MM, Blackman DK. Surveillance for sensory impairment, activity limitation, and health-related quality of life among older adults—United States, 1993-1997. Morbidity and Mortality Weekly Report. 1999; 48 (SS08):131–156. [ PubMed : 10634273 ]
  • CAP (College of American Pathologists). Guide to CAP proficiency testing/external quality assurance for international participants. 2013. [May 15, 2015]. www.cap.org/apps/docs/proficiency_testing/cap_proficiency_testing_guide.pdf.
  • CAP. Proficiency testing. 2015. [May 15, 2015]. www.cap.org/web/home/lab/proficiency-testing?_adf.ctrlstate=146u5nip6d_4&_afrLoop=77333689866130.
  • Carayon P, Schoofs Hundt A, Karsh BT, Gurses AP, Alvarado CJ, Smith M, Flatley Brennan P. Work system design for patient safety: The SEIPS model. Quality & Safety in Health Care. 2006; 15 (Suppl 1):i50–i58. [ PMC free article : PMC2464868 ] [ PubMed : 17142610 ]
  • Carayon P, Wetterneck TB, Rivera-Rodriguez AJ, Hundt AS, Hoonakker P, Holden R, Gurses AP. Human factors systems approach to healthcare quality and patient safety. Applied Ergonomics. 2014; 45 (1):14–25. [ PMC free article : PMC3795965 ] [ PubMed : 23845724 ]
  • CDC (Centers for Disease Control and Prevention). Laboratory medicine: A national status report. Falls Church, VA: The Lewin Group; 2008.
  • CDC. National hospital ambulatory medical care survey. Hyattsville, MD: Ambulatory and Hospital Care Statistics Branch, National Center for Health Statistics; 2010.
  • CDC. Clinical Laboratory Improvement Amendments (CLIA). 2014. [May 15, 2015]. www.cdc.gov/clia.
  • Centor RM, Witherspoon JM, Dalton HP, Brody CE, Link K. The diagnosis of strep throat in adults in the emergency room. Medical decision making: an international journal of the Society for Medical Decision Making. 1980; 1 (3):239–246. [ PubMed : 6763125 ]
  • Chassin MR, Kosecoff J, Solomon DH, Brook RH. How coronary angiography is used: Clinical determinants of appropriateness. JAMA. 1987; 258 (18):2543–2547. [ PubMed : 3312657 ]
  • CMS (Centers for Medicare & Medicaid Services). Accreditation organizations/exempt states. 2014. [November 3, 2015]. www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/Downloads/AOList.pdf.
  • CMS. Advanced diagnostic imaging accreditation. 2015a. [May 22, 2015]. www.cms.gov/Medicare/Provider-Enrollment-and-Certification/MedicareProviderSupEnroll/AdvancedDiagnosticImagingAccreditation.html.
  • CMS. Clinical Laboratory Improvement Amendments (CLIA). 2015b. [May 15, 2015]. www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/index.html?redirect=/clia.
  • CMS. Getting a second opinion before surgery. 2015c. [March 30, 2015]. www.medicare.gov/what-medicarecovers/part-b/second-opinions-before-surgery.html.
  • Cosmides L, Tooby J. Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition. 1996; 58 (1):1–73.
  • Croskerry P. The feedback sanction. Academic Emergency Medicine. 2000; 7 (11):1232–1238. [ PubMed : 11073471 ]
  • Croskerry P. The Importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine. 2003a; 78 (8):775–780. [ PubMed : 12915363 ]
  • Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Annals of Emergency Medicine. 2003b; 41 (1):110–120. [ PubMed : 12514691 ]
  • Croskerry P. Clinical cognition and diagnostic error: Applications of a dual process model of reasoning. Advances in Health Sciences Education. 2009a; 14 (Suppl 1):27–35. [ PubMed : 19669918 ]
  • Croskerry P. A universal model of diagnostic reasoning. Academic Medicine. 2009b; 84 (8):1022–1028. [ PubMed : 19638766 ]
  • Croskerry P, Musson D. Patient Safety in Emergency Medicine. Croskerry P, Cosby KS, Schenkel SM, Wears RL, editors. Philadelphia, PA: Lippincott, Williams & Wilkins; 2009. pp. 269–276. (Individual factors in patient safety).
  • Croskerry P, Norman G. Overconfidence in clinical decision making. American Journal of Medicine. 2008; 121 (5 Suppl):S24–S29. [ PubMed : 18440351 ]
  • Croskerry P, Abbass AA, Wu AW. How doctors feel: affective issues in patients' safety. Lancet. 2008; 372 (9645):1205–1206. [ PubMed : 19094942 ]
  • Croskerry P, Abbass AA, Wu AW. Emotional influences in patient safety. Journal of Patient Safety. 2010; 6 (4):199–205. [ PubMed : 21500605 ]
  • Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: Origins of bias and theory of debiasing. BMJ Quality and Safety. 2013; 22 (Suppl 2):ii58–ii64. [ PMC free article : PMC3786658 ] [ PubMed : 23882089 ]
  • Davies RH, Rees B. Include “eyeballing” the patient. BMJ. 2010; 340 (c291) [ PubMed : 20085978 ]
  • Eichbaum Q, Booth GS, Young PS, editors; Laposata M, editor. Transfusion medicine: Quality in laboratory diagnosis. New York: Demos Medical Publishing; 2012.
  • Elstein AS, Bordage G. Professional judgment: A reader in clinical decision making. Dowie J, Elstein A, editors. New York: Cambridge University Press; 1988. pp. 109–129. (Psychology of clinical reasoning).
  • Elstein AS, Schwartz A. Clinical problem solving and diagnostic decision making: Selective review of the cognitive literature. BMJ. 2002; 324 (7339):729–732. [ PMC free article : PMC1122649 ] [ PubMed : 11909793 ]
  • Elstein AS, Shulman L, Sprafka S. Medical problem solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press; 1978.
  • Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: A ten-year retrospective. Evaluation & the Health Professions. 1990; 13 (1):5–36.
  • Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Academic Medicine. 2011; 86 (3):307–313. [ PubMed : 21248608 ]
  • Emmett KR. Nonspecific and atypical presentation of disease in the older patient. Geriatrics. 1998; 53 (2):50–52. 58-60. [ PubMed : 9484285 ]
  • Epner PL, Gans JE, Graber ML. When diagnostic testing leads to harm: A new outcomes-based approach for laboratory medicine. BMJ Quality and Safety. 2013; 22 (Suppl 2):ii6–ii10. [ PMC free article : PMC3786651 ] [ PubMed : 23955467 ]
  • European Society of Radiology. The future role of radiology in healthcare. Insights into Imaging. 2010; 1 (1):2–11. [ PMC free article : PMC3259353 ] [ PubMed : 22347897 ]
  • Eva KW. The aging physician: Changes in cognitive processing and their impact on medical practice. Academic Medicine. 2002; 77 (10 Suppl):S1–S6. [ PubMed : 12377689 ]
  • Eva KW, Cunnington JPW. The difficulty with experience: Does practice increase susceptibility to premature closure? Journal of Continuing Education in the Health Professions. 2006; 26 (3):192–198. [ PubMed : 16986144 ]
  • Eva K, Link C, Lutfey K, McKinlay J. Swapping horses midstream: Factors related to physicians changing their minds about a diagnosis. Academic Medicine. 2010; 85 :1112–1117. [ PMC free article : PMC3701113 ] [ PubMed : 20592506 ]
  • Evans JP, Watson MS. Genetic testing and FDA regulation: Overregulation threatens the emergence of genomic medicine. JAMA. 2015; 313 (7):669–670. [ PubMed : 25560537 ]
  • Evans JSBT, Stanovich KE. Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science. 2013; 8 (3):223–241. [ PubMed : 26172965 ]
  • FDA (Food and Drug Administration). In vitro diagnostics. 2014a. [May 15, 2015]. www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/default.htm.
  • FDA. Laboratory developed tests. 2014b. [May 15, 2015]. www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/ucm407296.htm.
  • Ferket BS, Genders TS, Colkesen EB, Visser JJ, Spronk S, Steyerberg EW, Hunink MG. Systematic review of guidelines on imaging of asymptomatic coronary artery disease. Journal of the American College of Cardiology. 2011; 57 (15):1591–1600. [ PubMed : 21474039 ]
  • Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR, Lazaro P, Loo Mvh, McDonnell J, Vader J, Kahan JP. The RAND/UCLA appropriateness method user's manual. 2001. [May 13, 2015]. www.rand.org/pubs/monograph_reports/MR1269.
  • Flores G. Language barriers to health care in the United States. New England Journal of Medicine. 2006; 355 (3):229–231. [ PubMed : 16855260 ]
  • Frommer D, Morris J, Sherlock S, Abrams J, Newman S. Kayser-Fleischer-like rings in patients without Wilson's disease. Gastroenterology. 1977; 72 (6):1331–1335. [ PubMed : 558126 ]
  • Gandhi JS. Re: William Osler: A life in medicine: Book review. BMJ. 2000; 321 :1087.
  • Gawande A. Overkill. The New Yorker. 2015 May 11; [July 13, 2015]; www.newyorker.com/magazine/2015/05/11/overkill-atul-gawande.
  • Gigerenzer G. Adaptive thinking: Rationality in the real world. New York: Oxford University Press; 2000.
  • Gigerenzer G. HIV screening: Helping clinicians make sense of test results to patients. BMJ. 2013; 347 :f5151. [ PubMed : 23965510 ]
  • Gigerenzer G, Edwards A. Simple tools for understanding risks: From innumeracy to insight. BMJ. 2003; 327 (7417):741–744. [ PMC free article : PMC200816 ] [ PubMed : 14512488 ]
  • Gigerenzer G, Goldstein DG. Reasoning the fast and frugal way: Models of bounded rationality. Psychology Review. 1996; 103 :650–669. [ PubMed : 8888650 ]
  • Gittell JH, Seidner R, Wimbush J. A relational model of how high-performance work systems work. Organization Science. 2010; 21 (2):490–506.
  • Goodman SN. Toward evidence-based medical statistics. 1: The P value fallacy. Annals of Internal Medicine. 1999; 130 (12):995–1004. [ PubMed : 10383371 ]
  • Gopalakrishna G, Mustafa RA, Davenport C, Scholten RJPM, Hyde C, Brozek J, Schunemann HJ, Bossuyt PMM, Leeflang MMG, Langendam MW. Applying Grading of Recommendations Assessment, Development and Evaluation (GRADE) to diagnostic tests was challenging but doable. Journal of Clinical Epidemiology. 2014; 67 (7):760–768. [ PubMed : 24725643 ]
  • Govern P. Diagnostic management efforts thrive on teamwork. Vanderbilt University Medical Center Reporter. 2013 March 7; [February 11, 2015]; http://news ​.vanderbilt ​.edu/2013/03/diagnosticmanagement-efforts-thrive-on-teamwork .
  • Graber ML. Diagnostic error in internal medicine. Archives of Internal Medicine. 2005; 165 (13):1493–1499. [ PubMed : 16009864 ]
  • Gray-Miceli D. Modification of assessment and atypical presentation in older adults with complex illness. New York: The John A. Hartford Foundation Institute for Geriatric Nursing; 2008.
  • Grimes DA, Schulz KF. Refining clinical diagnosis with likelihood ratios. Lancet. 2005; 365 (9469):1500–1505. [ PubMed : 15850636 ]
  • Griner PF, Mayewski RJ, Mushlin AI, Greenland P. Selection and interpretation of diagnostic tests and procedures: Principles and applications. Annals of Internal Medicine. 1981; 94 (4 Pt 2):557–592. [ PubMed : 6452080 ]
  • Groen GJ, Patel VL. Medical problem-solving: Some questionable assumptions. Medical Education. 1985; 19 (2):95–100. [ PubMed : 3982318 ]
  • Gunderman RB. The medical community's changing vision of the patient: The importance of radiology. Radiology. 2005; 234 (2):339–342. [ PubMed : 15670989 ]
  • Han PK, Klabunde CN, Breen N, Yuan G, Grauman A, Davis WW, Taplin SH. Multiple clinical practice guidelines for breast and cervical cancer screening: perceptions of U.S. primary care physicians. Medical Care. 2011; 49 (2):139–148. [ PMC free article : PMC4207297 ] [ PubMed : 21206294 ]
  • Hendee WR, Becker GJ, Borgstede JP, Bosma J, Casarella WJ, Erickson BA, Maynard CD, Thrall JH, Wallner PE. Addressing overutilization in medical imaging. Radiology. 2010; 257 (1):240–245. [ PubMed : 20736333 ]
  • Henriksen K, Brady J. The pursuit of better diagnostic performance: A human factors perspective. BMJ Quality and Safety. 2013; 22 (Suppl 2):ii1–ii5. [ PMC free article : PMC3786636 ] [ PubMed : 23704082 ]
  • HFAP (Healthcare Facilities Accreditation Program). Notice of HFAP approval by CMS. 2015. [May 15, 2015]. www ​.hfap.org/AccreditationPrograms ​/LabsCMS.aspx .
  • Hickner J, Thompson PJ, Wilkinson T, Epner P, Shaheen M, Pollock AM, Lee J, Duke CC, Jackson BR, Taylor JR. Primary care physicians' challenges in ordering clinical laboratory tests and interpreting results. Journal of the American Board of Family Medicine. 2014; 27 (2):268–274. [ PubMed : 24610189 ]
  • Hoffman KA, Aitken LM, Duffield C. A comparison of novice and expert nurses' cue collection during clinical decision-making: Verbal protocol analysis. International Journal of Nursing Studies. 2009; 46 (10):1335–1344. [ PubMed : 19555954 ]
  • Hollensead SC, Lockwood WB, Elin RJ. Errors in pathology and laboratory medicine: Consequences and prevention. Journal of Surgical Oncology. 2004; 88 (3):161–181. [ PubMed : 15562462 ]
  • Holmboe ES, Durning SJ. Assessing clinical reasoning: Moving from in vitro to in vivo. Diagnosis. 2014; 1 (1):111–117. [ PubMed : 29539977 ]
  • Hope C, Estrada N, Weir C, Teng CC, Damal K, Sauer BC. Documentation of delirium in the VA electronic health record. BMC Research Notes. 2014; 7 :208. [ PMC free article : PMC3985575 ] [ PubMed : 24708799 ]
  • Hricak H. Oncologic imaging: A guiding hand of personalized cancer care. Radiology. 2011; 259 (3):633–640. [ PubMed : 21493796 ]
  • Hsu J, Brozek JL, Terracciano L, Kreis J, Compalati E, Stein AT, Fiocchi A, Schunemann HJ. Application of GRADE: Making evidence-based recommendations about diagnostic tests in clinical practice guidelines. Implementation Science. 2011; 6 :62. [ PMC free article : PMC3126717 ] [ PubMed : 21663655 ]
  • IMV. Ready for replacement? New IMV survey finds aging MRI scanner installed base. 2014. [May 3, 2015]. www ​.imvinfo.com/user ​/documents/content_documents ​/abt_prs/2014 ​_02_03_16_51_22_809 ​_IMV_MR_Outlook_Press_Release_Jan_2014 ​.pdf .
  • IOM (Institute of Medicine). Clinical practice guidelines: Directions for a new program. Washington, DC: National Academy Press; 1990. [ PubMed : 25144032 ]
  • IOM. Medicare laboratory payment policy: Now and in the future. Washington, DC: National Academy Press; 2000. [ PubMed : 25057735 ]
  • IOM. Unequal treatment: Confronting racial and ethnic disparties in health care. Washington, DC: The National Academies Press; 2003. [ PubMed : 25032386 ]
  • IOM. Improving breast imaging quality standards. Washington, DC: The National Academies Press; 2005.
  • IOM. Cancer biomarkers: The promises and challenges of improving detection and treatment. Washington, DC: The National Academies Press; 2007.
  • IOM. Retooling for an aging America: Building the health care workforce. Washington, DC: The National Academies Press; 2008. [ PubMed : 25009893 ]
  • IOM. Evaluation of biomarkers and surrogate endpoints in chronic disease. Washington, DC: The National Academies Press; 2010. [ PubMed : 25032382 ]
  • IOM. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press; 2011a. [ PubMed : 24983061 ]
  • IOM. Finding what works in health care: Standards for systematic reviews. Washington, DC: The National Academies Press; 2011b. [ PubMed : 24983062 ]
  • IOM. Evolution of translational omics: Lessons learned and the path forward. Washington, DC: The National Academies Press; 2012. [ PubMed : 24872966 ]
  • IOM. Best care at lower cost: The path to continuously learning health care in America. Washington, DC: The National Academies Press; 2013a. [ PubMed : 24901184 ]
  • IOM. Delivering high-quality cancer care: Charting a new course for a system in crisis. Washington, DC: The National Academies Press; 2013b. [ PubMed : 24872984 ]
  • IOM. Improving genetics education in graduate and continuing health professional education: Workshop summary. Washington, DC: The National Academies Press; 2015a. [ PubMed : 25674655 ]
  • IOM. Policy issues in the clinical development and use of biomarkers for molecularly targeted therapies. 2015b. [May 22, 2015]. www ​.iom.edu/Activities ​/Research/BiomarkersforMolecularlyTargetedTherapies.aspx .
  • Jameson JL, Longo DL. Precision medicine—Personalized, problematic, and promising. New England Journal of Medicine. 2015; 372 (23):2229–2234. [ PubMed : 26014593 ]
  • Jarrett PG, Rockwood K, Carver D, Stolee P, Cosway S. Illness presentation in elderly patients. Archives of Internal Medicine. 1995; 155 (10):1060–1064. [ PubMed : 7748049 ]
  • Johansen Taber KA, Dickinson BD, Wilson M. The promise and challenges of next-generation genome sequencing for clinical care. JAMA Internal Medicine. 2014; 174 (2):275–280. [ PubMed : 24217348 ]
  • Johnson-Laird PN, Oatley K. Basic emotions, rationality, and folk theory. Cognition & Emotion. 1992; 6 (3-4):201–223.
  • The Joint Commission. “What did the doctor say?” Improving health literacy to protect patient safety. 2007. [May 11, 2015]. www ​.jointcommission.org ​/What_Did_the_Doctor_Say/default.aspx .
  • The Joint Commission. Eligibility for laboratory accreditation. 2015. [May 15, 2015]. www ​.jointcommission.org ​/eligibility_for_laboratory ​_accreditation/default.aspx .
  • Jutel A. Sociology of diagnosis: A preliminary review. Sociology of Health and Illness. 2009; 31 (2):278–299. [ PubMed : 19220801 ]
  • Kahn JM, Gould MK, Krishnan JA, Wilson KC, Au DH, Cooke CR, Douglas IS, Feemster LC, Mularski RA, Slatore CG, Wiener RS. An official American thoracic society workshop report: Developing performance measures from clinical practice guidelines. Annals of the American Thoracic Society. 2014; 11 (4):S186–S195. [ PMC free article : PMC5469393 ] [ PubMed : 24828810 ]
  • Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2011.
  • Kahneman D, Klein G. Conditions for intuitive expertise: A failure to disagree. American Psychologist. 2009; 64 (6):515–526. [ PubMed : 19739881 ]
  • Kanwisher N, Yovel G. The fusiform face area: A cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society B: Biological Sciences. 2006; 361 (1476):2109–2128. [ PMC free article : PMC1857737 ] [ PubMed : 17118927 ]
  • Kassirer JP. Our stubborn quest for diagnostic certainty. A cause of excessive testing. New England Journal of Medicine. 1989; 320 (22):1489–1491. [ PubMed : 2497349 ]
  • Kassirer JP. Teaching clinical reasoning: Case-based and coached. Academic Medicine. 2010; 85 (7):1118–1124. [ PubMed : 20603909 ]
  • Kassirer JP. Imperatives, expediency, and the new diagnosis. Diagnosis. 2014; 1 (1):11–12. [ PubMed : 29539968 ]
  • Kassirer JP, Wong J, Kopelman R. Learning clinical reasoning. Baltimore: Williams & Wilkins; 2010.
  • Kent DM, Hayward RA. Limitations of applying summary results of clinical trials to individual patients: The need for risk stratification. JAMA. 2007; 298 (10):1209–1212. [ PubMed : 17848656 ]
  • Klein G. Sources of power: How people make decisions. Cambridge, MA: MIT Press; 1998.
  • Kleinman S, Busch MP, Hall L, Thomson R, Glynn S, Gallahan D, Ownby HE, Williams AE. False-positive HIV-1 test results in a low-risk screening setting of voluntary blood donation: Retrovirus Epidemiology Donor Study. JAMA. 1998; 280 (12):1080–1085. [ PubMed : 9757856 ]
  • Korf BR, Rehm HL. New approaches to molecular diagnosis. JAMA. 2013; 309 (14):1511–1521. [ PubMed : 23571590 ]
  • Kosecoff J, Chassin MR, Fink A, Flynn MF, McCloskey L, Genovese BJ, Oken C, Solomon DH, Brook RH. Obtaining clinical data on the appropriateness of medical care in community practice. JAMA. 1987; 258 (18):2538–2542. [ PubMed : 3312656 ]
  • Kostopoulou O, Rosen A, Round T, Wright E, Douiri A, Delaney B. Early diagnostic suggestions improve accuracy of GPs: A randomised controlled trial using computer-simulated patients. British Journal of General Practice. 2015; 65 (630):e49–e54. [ PMC free article : PMC4276007 ] [ PubMed : 25548316 ]
  • Krupinski EA, Berbaum KS, Caldwell RT, Schartz KM, Madsen MT, Kramer DJ. Do long radiology workdays affect nodule detection in dynamic CT interpretation? Journal of the American College of Radiology. 2012; 9 (3):191–198. [ PMC free article : PMC3296477 ] [ PubMed : 22386166 ]
  • Kugler J, Verghese A. The physical exam and other forms of fiction. Journal of General Internal Medicine. 2010; 25 (8):756–757. [ PMC free article : PMC2896585 ] [ PubMed : 20502975 ]
  • Laposata M. Coagulation disorders: Quality in laboratory diagnosis. New York: Demos Medical Publishing; 2010.
  • Laposata M, Dighe A. “Pre-pre” and “post-post” analytical error: High-incidence patient safety hazards involving the clinical laboratory. Clinical Chemistry and Laboratory Medicine. 2007; 45 (6):712–719. [ PubMed : 17579522 ]
  • Lee DW, Levy F. The sharp slowdown in growth of medical imaging: an early analysis suggests combination of policies was the cause. Health Affairs (Millwood). 2012; 31 (8):1876–1884. [ PubMed : 22842655 ]
  • Lenzer J, Hoffman JR, Furberg CD, Ioannidis JP. Ensuring the integrity of clinical practice guidelines: A tool for protecting patients. BMJ. 2013; 347 :f5535. on behalf of the Guideline Panel Review Working Group. [ PubMed : 24046286 ]
  • Lichstein PR. Clinical Methods: The History, Physical, and Laboratory Examinations, 3rd edition. Walker HK, Hall WD, Hurst JW, editors. Boston: Butterworths; 1990. (The medical interview). [ PubMed : 21250045 ]
  • Lin GA, Dudley RA, Lucas FL, Malenka DJ, Vittinghoff E, Redberg RF. Frequency of stress testing to document ischemia prior to elective percutaneous coronary intervention. JAMA. 2008; 300 (15):1765–1773. [ PubMed : 18854538 ]
  • Lipman RM, Deutsch TA. A yellow-green posterior limbal ring in a patient who does not have Wilson's disease. Archives of Ophthalmology. 1990; 108 (10):1385. [ PubMed : 2222268 ]
  • Lipshitz R, Klein G, Orasanu J, Salas E. Taking stock of naturalistic decision making. Journal of Behavioral Decision Making. 2001; 14 (5):331–352.
  • Loewenstein G, Lerner JS. Handbook of affective sciences. Davidson RJ, Scherer KR, Goldsmith HH, editors. New York: Oxford University Press; 2003. pp. 619–642. (The role of affect in decision making).
  • Lundberg GD. Acting on significant laboratory results. JAMA. 1981; 245 (17):1762–1763. [ PubMed : 7218491 ]
  • Marois R, Ivanoff J. Capacity limits of information processing in the brain. Trends in Cognitive Sciences. 2005; 9 (6):296–305. [ PubMed : 15925809 ]
  • McDonald CJ. Medical heuristics: the silent adjudicators of clinical practice. Annals of Internal Medicine. 1996; 124 (1 Pt 1):56–62. [ PubMed : 7503478 ]
  • McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA. The quality of health care delivered to adults in the United States. New England Journal of Medicine. 2003; 348 (26):2635–2645. [ PubMed : 12826639 ]
  • McSherry D. Avoiding premature closure in sequential diagnosis. Artificial Intelligence in Medicine. 1997; 10 (3):269–283. [ PubMed : 9232189 ]
  • Medicaid.gov. ICD-10 Changes from ICD-9. 2015. [June 23, 2015]. www ​.medicaid.gov/Medicaid-CHIPProgram-Information ​/By-Topics/Data-and-Systems ​/ICD-Coding ​/ICD-10-Changesfrom-ICD-9.html .
  • Meyer F, Meyer TD. The misdiagnosis of bipolar disorder as a psychotic disorder: Some of its causes and their influence on therapy. Journal of Affective Disorders. 2009; 112 (1-3):174–183. [ PubMed : 18555536 ]
  • Miller GA. The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review. 1956; 63 (2):81–97. [ PubMed : 13310704 ]
  • Moroff SV, Pauker SG. What to do when the patient outlives the literature. Medical Decision Making. 1983; 3 (3):313–338. [ PubMed : 6645821 ]
  • Mouton CP, Bazaldua OV, Pierce B, Espino DV. Common infections in older adults. American Family Physician. 2001; 63 (2):257–268. [ PubMed : 11201692 ]
  • Mulley AG, Trimble C, Elwyn G. Stop the silent misdiagnosis: Patients' preferences matter. BMJ. 2012; 345 :e6572. [ PubMed : 23137819 ]
  • National Stroke Association. Act FAST. 2015. [May 14, 2015]. www ​.stroke.org/understand-stroke ​/recognizingstroke/act-fast .
  • Nelson C, Hojvat S, Johnson B, Petersen J, Schriefer M, Beard CB, Petersen L, Mead P. Concerns regarding a new culture method for Borrelia burgdorferi not approved for the diagnosis of Lyme disease. Morbidity and Mortality Weekly Report. 2014; 63 (15):333. [ PMC free article : PMC5779394 ] [ PubMed : 24739342 ]
  • Neufeld V, Norman G, Feightner J, Barrows H. Clinical problem-solving by medical students: A cross-sectional and longitudinal analysis. Medical Education. 1981; 15 (5):315–322. [ PubMed : 6973686 ]
  • NIA (National Institute on Aging). A clinician's handbook: Talking with your older patient. Bethesda, MD: National Institutes of Health; 2008.
  • Nichols JH, Rauch CA, editors. Clinical chemistry. New York: Demos Medical Publishing; 2013.
  • NIH (National Institutes of Health). Precision Medicine Initiative. 2015. [May 22, 2015]. www ​.nih.gov/precisionmedicine .
  • Norman GR. Research in clinical reasoning: Past history and current trends. Medical Education. 2005; 39 (4):418–427. [ PubMed : 15813765 ]
  • Norman GR, Eva KW. Diagnostic error and clinical reasoning. Medical Education. 2010; 44 (1):94–100. [ PubMed : 20078760 ]
  • Ostbye T, Yarnall KS, Krause KM, Pollak KI, Gradison M, Michener JL. Is there time for management of patients with chronic diseases in primary care? Annals of Family Medicine. 2005; 3 (3):209–214. [ PMC free article : PMC1466884 ] [ PubMed : 15928223 ]
  • PAF (Patient Advocate Foundation). Second opinions. 2012. [March 30, 2015]. www ​.patientadvocate.org/help ​.php/index.php?p=691 .
  • Papp KK, Huang GC, Lauzon Clabo LM, Delva D, Fischer M, Konopasek L, Schwartzstein RM, Gusic M. Milestones of critical thinking: A developmental model for medicine and nursing. Academic Medicine. 2014; 89 (5):715–720. [ PubMed : 24667504 ]
  • Parasuraman R, Sheridan TB, Wickens CD. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man and Cybernetics—Part A: Systems and Humans. 2000; 30 (3):286–297. [ PubMed : 11760769 ]
  • Patel MR, Spertus JA, Brindis RG, Hendel RC, Douglas PS, Peterson ED, Wolk MJ, Allen JM, Raskin IE. ACCF proposed method for evaluating the appropriateness of cardiovascular imaging. Journal of the American College of Cardiology. 2005; 46 (8):1606–1613. [ PubMed : 16226195 ]
  • Pauker SG, Kassirer JP. Therapeutic decision making: A cost-benefit analysis. New England Journal of Medicine. 1975; 293 (5):229–234. [ PubMed : 1143303 ]
  • Pauker SG, Kassirer JP. The threshold approach to clinical decision making. New England Journal of Medicine. 1980; 302 (20):1109–1117. [ PubMed : 7366635 ]
  • Pelaccia T, Tardif J, Triby E, Charlin B. An analysis of clinical reasoning through a recent and comprehensive approach: The dual-process theory. Medical Education Online. 2011 March; 14 :16. [ PMC free article : PMC3060310 ] [ PubMed : 21430797 ]
  • Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr., Pollack CV Jr., Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA. 2006; 295 (16):1912–1920. [ PubMed : 16639050 ]
  • Pincus H. Diagnostic error: Issues in behavioral health. Washington, DC: 2014. (Presentation to the Committee on Diagnostic Error in Health Care, November 6, 2014).
  • Plebani M, Lippi G. Closing the brain-to-brain loop in laboratory testing. Clinical Chemistry and Laboratory Medicince. 2011; 49 (7):1131–1133. [ PubMed : 21663564 ]
  • Plebani M, Laposata M, Lundberg GD. The brain-to-brain loop concept for laboratory testing 40 years after its introduction. American Journal of Clinical Pathology. 2011; 136 (6):829–833. [ PubMed : 22095366 ]
  • Pope JH, Aufderheide TP, Ruthazer R, Woolard RH, Feldman JA, Beshansky JR, Griffith JL, Selker HP. Missed diagnoses of acute cardiac ischemia in the emergency department. New England Journal of Medicine. 2000; 342 (16):1163–1170. [ PubMed : 10770981 ]
  • Porter ME. What is value in health care? New England Journal of Medicine. 2010; 363 (26):2477–2481. [ PubMed : 21142528 ]
  • Pronovost PJ. Enhancing physicians' use of clinical guidelines. JAMA. 2013; 310 (23):2501–2502. [ PubMed : 24310916 ]
  • Qureshi AM, McDonald L, Primrose WR. Management of myocardial infarction in the very elderly—Impact of clinical effectiveness on practice. Scottish Medical Journal. 2000; 45 (6):180–182. [ PubMed : 11216310 ]
  • Reeves RR, Parker JD, Burke RS, Hart RH. Inappropriate psychiatric admission of elderly patients with unrecognized delirium. Southern Medical Journal. 2010; 103 (2):111–115. [ PubMed : 20065900 ]
  • Rich MW. Epidemiology, clinical features, and prognosis of acute myocardial infarction in the elderly. American Journal of Geriatric Cardiology. 2006; 15 (1):7–11. quiz 12. [ PubMed : 16415640 ]
  • Rosch E, Mervis CB. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology. 1975; 7 (4):573–605.
  • Rosenberg CE. The tyranny of diagnosis: Specific entities and individual experience. Milbank Quarterly. 2002; 80 (2):237–260. [ PMC free article : PMC2690110 ] [ PubMed : 12101872 ]
  • RSNA (Radiological Society of North America). QI tools. 2015. [May 22, 2015]. www ​.rsna.org/QI_Tools.aspx .
  • SAMHSA (Substance Abuse and Mental Health Services Administration) and HRSA (Health Resources and Services Administration). Screening tools. 2015. [May 22, 2015]. www ​.integration.samhsa ​.gov/clinical-practice/screening-tools .
  • Sarter N. Use(r)-centered design of health IT: Challenges and lessons learned. Washington, DC: 2014. (Presentation to the Committee on Diagnostic Error in Health Care, August, 7, 2014).
  • Schiff GD, Leape LL. Commentary: How can we make diagnosis safer? Academic Medicine. 2012; 87 (2):135–138. [ PubMed : 22273611 ]
  • Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: Theory and implications. Academic Medicine. 1990; 65 :611–621. [ PubMed : 2261032 ]
  • Schulman KA, Berlin JA, Harless W, Kerner JF, Sistrunk S, Gersh BJ, Dube R, Taleghani CK, Burke JE, Williams S, Eisenberg JM, Escarce JJ. The effect of race and sex on physicians' recommendations for cardiac catheterization. New England Journal of Medicine. 1999; 340 (8):618–626. [ PubMed : 10029647 ]
  • Schwartz LH, Panicek DM, Berk AR, Li Y, Hricak H. Improving communication of diagnostic radiology findings through structured reporting. Radiology. 2011; 260 (1):174–181. [ PMC free article : PMC3121011 ] [ PubMed : 21518775 ]
  • Seshia SS, Makhinson M, Phillips DF, Young GB. Evidence-informed person-centered healthcare (part I): Do “cognitive biases plus” at organizational levels influence quality of evidence? Journal of Evaluation in Clinical Practice. 2014a; 20 (6):734–747. [ PubMed : 25429739 ]
  • Seshia SS, Makhinson M, Young GB. Evidence-informed person-centered health care (part II): Are “cognitive biases plus” underlying the EBM paradigm responsible for undermining the quality of evidence? Journal of Evaluation in Clinical Practice. 2014b; 20 (6):748–758. [ PubMed : 25494630 ]
  • Sharfstein J. FDA regulation of laboratory-developed diagnostic tests: Protect the public, advance the science. JAMA. 2015; 313 (7):667–668. [ PubMed : 25560381 ]
  • Sibbald M, de Bruin AB, van Merrienboer JJ. Checklists improve experts' diagnostic decisions. Medical Education. 2013; 47 (3):301–308. [ PubMed : 23398016 ]
  • Simel D, Rennie D. The rational clinical examination: Evidence-based clinical diagnosis. McGraw-Hill Professional; 2008.
  • Singer T, Verhaeghen P, Ghisletta P, Lindenberger U, Baltes PB. The fate of cognition in very old age: Six-year longitudinal findings in the Berlin Aging Study (BASE). Psychology and Aging. 2003; 18 (2):318–331. [ PubMed : 12825779 ]
  • Slovic P, Peters E. Risk perception and affect. Current Directions in Psychological Science. 2006; 15 (6):322–325.
  • Slovic P, Finucane ML, Peters E, MacGregor DG. Rational actors or rational fools: Implications of the affect heuristic for behavioral economics. Journal of SocioEconomics. 2002; 31 (4):329–342.
  • Slovic P, Finucane ML, Peters E, MacGregor DG. Risk as analysis and risk as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis. 2004; 24 (2):311–322. [ PubMed : 15078302 ]
  • Small SA. Age-related memory decline: Current concepts and future directions. Archives of Neurology. 2001; 58 (3):360–364. [ PubMed : 11255438 ]
  • Smith EE, Medin DL. Categories and concepts. Cambridge, MA: Harvard University Press; 1981.
  • Smith EE, Medin DL. Foundations of cognitive psychology: Core readings. Levitin DJ, editor. Cambridge, MA: Bradford; 2002. pp. 277–292. (The exemplar view).
  • Smith MB, Sainfort PC. A balance theory of job design for stress reduction. International Journal of Industrial Ergonomics. 1989; 4 (1):67–79.
  • Snow V, Mottur-Pilson C, Cooper RJ, Hoffman JR. Principles of appropriate antibiotic use for acute pharyngitis in adults. Annals of Internal Medicine. 2001; 134 (6):506–508. [ PubMed : 11255529 ]
  • Song Y, Skinner J, Bynum J, Sutherland J, Wennberg JE, Fisher ES. Regional variations in diagnostic practices. New England Journal of Medicine. 2010; 363 (1):45–53. [ PMC free article : PMC2924574 ] [ PubMed : 20463332 ]
  • Sox HC Jr. Probability theory in the use of diagnostic tests. An introduction to critical study of the literature. Annals of Internal Medicine. 1986; 104 (1):60–66. [ PubMed : 3079637 ]
  • Stanford Medicine 25 Team. Stanford medicine 25. 2015. [July 27, 2015]. http: ​//stanfordmedicine25 ​.stanford.edu/about .
  • Stanovich KE. Decision making and rationality in the modern world. New York: Oxford; 2009.
  • Stanovich KE, Toplak ME. Defining features versus incidental correlates of Type 1 and Type 2 processing. Mind & Society. 2012; 11 (1):3–13.
  • Stark M, Fins JJ. The ethical imperative to think about thinking—Diagnostics, metacognition, and medical professionalism. Cambridge Quarterly of Healthcare Ethics. 2014; 23 (4):386–396. [ PubMed : 25033249 ]
  • Stratton CW. Clinical microbiology: Quality in laboratory diagnosis. New York: Demos Medical Publishing; 2011.
  • Suringa DWR, Bank LJ, Ackerman AB. Role of measles virus in skin lesions and Koplik's spots. New England Journal of Medicine. 1970; 283 (21):1139–1142. [ PubMed : 5474350 ]
  • Timbie JW, Hussey PS, Burgette LF, Wenger NS, Rastegar A, Brantely I, Khodyakov D, Leuschner KJ, Weidmer BA, Kahn KL. Medicare imaging demonstration final evaluation: Report to Congress. Santa Monica, CA: RAND; 2014. [ PMC free article : PMC5158237 ] [ PubMed : 28083357 ]
  • Timmermans S, Mauck A. The promises and pitfalls of evidence-based medicine. Health Affairs (Millwood). 2005; 24 (1):18–28. [ PubMed : 15647212 ]
  • Tinetti ME, Bogardus ST Jr., Agostini JV. Potential pitfalls of disease-specific guidelines for patients with multiple conditions. New England Journal of Medicine. 2004; 351 (27):2870–2874. [ PubMed : 15625341 ]
  • Tombu MN, Asplund CL, Dux PE, Godwin D, Martin JW, Marois R. A unified attentional bottleneck in the human brain. Proceedings of the National Academies of Sciences of the United States of America. 2011; 108 (33):13426–13431. [ PMC free article : PMC3158154 ] [ PubMed : 21825137 ]
  • Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009; 301 (8):831–841. [Erratum appears in JAMA, 2009, 301(15):1544] [ PubMed : 19244190 ]
  • Trikalinos TA, Siebert U, Lau J. Decision-analytic modeling to evaluate benefits and harms of medical tests: Uses and limitations. Medical Decision Making. 2009; 29 (5):E22–E29. [ PubMed : 19734441 ]
  • Verghese A. Treat the patient, not the CT scan. The New York Times. 2011 February 26; [August 5, 2015]; www ​.nytimes.com/2011 ​/02/27/opinion/27verghese.html .
  • Verghese A, Horwitz RI. In praise of the physical examination. BMJ. 2009; 339 :b5448. [ PubMed : 20015910 ]
  • Verghese A, Brady E, Kapur CC, Horwitz RI. The bedside evaluation: ritual and reason. Annals of Internal Medicine. 2011; 155 (8):550–553. [ PubMed : 22007047 ]
  • Vohs KD, Baumeister RF, Loewenstein G. Do emotions help or hurt decision making? A hedgefoxian perspective. New York: Russell Sage Foundation; 2007.
  • Welch HG. Less medicine more health: 7 assumptions that drive too much medical care. Boston, MA: Beacon Press; 2015.
  • WHO (World Health Organization). International classification of diseases (ICD). Geneva: World Health Organization; 2012.
  • Whiting P, Rutjes AWS, Reltsma JB, Glas AS, Bossuyt PMM, Kleljnen J. Sources of variation and bias in studies of diagnostic accuracy. Annals of Internal Medicine. 2004; 140 (3):189–202. [ PubMed : 14757617 ]
  • Yarnall KS, Pollak KI, Ostbye T, Krause KM, Michener JL. Primary care: Is there enough time for prevention? American Journal of Public Health. 2003; 93 (4):635–641. [ PMC free article : PMC1447803 ] [ PubMed : 12660210 ]
  • Zhi M, Ding EL, Theisen-Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: A 15-year meta-analysis. PLoS ONE. 2013; 8 (11):e78962. [ PMC free article : PMC3829815 ] [ PubMed : 24260139 ]
  • Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis. 2015; 2 (2):97–103. [ PMC free article : PMC4779119 ] [ PubMed : 26955512 ]
  • Zwaan L, Thijs A, Wagner C, van der Wal G, Timmermans DR. Design of a study on suboptimal cognitive acts in the diagnostic process, the effect on patient outcomes and the influence of workload, fatigue and experience of physician. BMC Health Services Research. 2009; 9 :65. [ PMC free article : PMC2680398 ] [ PubMed : 19383168 ]

In this report, the committee employs the terminology “the diagnostic process” to convey diagnosis as a process.

The committee uses the term “diagnostic testing” to be inclusive of all types of testing, including medical imaging, anatomic pathology, and laboratory medicine, as well as other types of testing, such as mental health assessments, vision and hearing testing, and neurocognitive testing.

Public Law 110-275 (July 15, 2008).

The term “system 1” is an oversimplification because it is unlikely there is a single cognitive or neural system responsible for all system 1 cognitive processes.

Inductive reasoning involves probabilistic reasoning (see the following section).

Committee on Diagnostic Error in Health Care; Board on Health Care Services; Institute of Medicine; The National Academies of Sciences, Engineering, and Medicine; Balogh EP, Miller BT, Ball JR, editors. Improving Diagnosis in Health Care. Washington (DC): National Academies Press (US); 2015 Dec 29. Chapter 2, The Diagnostic Process.


Diagnostic reasoning in cardiovascular medicine

  • John E Brush Jr, senior medical director, professor of medicine 1 2,
  • Jonathan Sherbino, assistant dean, professor 3 4,
  • Geoffrey R Norman, professor emeritus and scientist 3
  • 1 Sentara Health Research Center, Norfolk, VA, USA
  • 2 Eastern Virginia Medical School, Norfolk, VA, USA
  • 3 McMaster Education Research, Innovation and Theory (MERIT) Program, McMaster University, Hamilton, ON, Canada
  • 4 Department of Medicine, McMaster University, Hamilton, ON, Canada
  • Correspondence to: J E Brush jebrush{at}sentara.com

Research in cognitive psychology shows that expert clinicians make a medical diagnosis through a two step process of hypothesis generation and hypothesis testing. Experts generate a list of possible diagnoses quickly and intuitively, drawing on previous experience. Experts remember specific examples of various disease categories as exemplars, which enables rapid access to diagnostic possibilities and gives them an intuitive sense of the base rates of various diagnoses. After generating diagnostic hypotheses, clinicians then test the hypotheses and subjectively estimate the probability of each diagnostic possibility by using a heuristic called anchoring and adjusting. Although both novices and experts use this two step diagnostic process, experts distinguish themselves as better diagnosticians through their ability to mobilize experiential knowledge in a manner that is content specific. Experience is clearly the best teacher, but some educational strategies have been shown to modestly improve diagnostic accuracy. Increased knowledge about the cognitive psychology of the diagnostic process and the pitfalls inherent in the process may inform clinical teachers and help learners and clinicians to improve the accuracy of diagnostic reasoning. This article reviews the literature on the cognitive psychology of diagnostic reasoning in the context of cardiovascular disease.

Introduction

An accurate and timely diagnosis is of paramount importance for patients with heart disease. A missed cardiac diagnosis can harm the patient, cause dissatisfaction, and have life threatening ramifications. 1 2 3 Cardiac diseases are common, and cardiac diagnostic encounters are frequent, particularly in emergency departments, where a missed cardiac diagnosis is a leading cause of malpractice litigation. 4 5 One cardiac diagnostic challenge is an ST elevation myocardial infarction (STEMI), for which a correct and timely diagnosis is critical for triggering emergency life saving interventions. 6 7 The door-to-balloon time for STEMI is a widely reported measure of a hospital’s performance that is critically dependent on reliable diagnostic competence. 8 Diagnostic accuracy and timing are critical for cardiovascular patients, yet the process of diagnostic reasoning is underemphasized in cardiology training and continuing medical education, as it is in many specialty areas. 9

Insufficient attention has been paid to the important area of diagnostic reasoning, which has been eclipsed in the cardiology literature by reports of large clinical trials and novel technological advances. Also, very few studies have examined how cardiovascular diagnostic strategies affect decisions about patients’ management and outcomes, and costs, even though cardiovascular diagnostic tests and imaging are frequently used, have substantial impact, and are exceedingly costly. 10 11

Medical error was brought to public attention in the United States and elsewhere more than two decades ago by the Institute of Medicine’s publication To Err Is Human . 12 A more recent Institute of Medicine publication, Improving Diagnosis in Health Care , brought attention to the problem of diagnostic error with calls for better education about the diagnostic process, better measurement of diagnostic error, and more research and emphasis on diagnostic competency. 13

Hiding in plain view has been the abundant literature in cognitive science, which has yielded important evidence on how clinical experts make a diagnosis. By “expert,” we mean a clinician who has completed specialty training and is in practice, and so can be assumed to have the necessary knowledge and experience in their specialty; a “novice” is a learner in the early stages of training who has not yet acquired the necessary knowledge and experience for independent practice. Much of this literature has been published in education and psychology journals and may have escaped the attention of practicing cardiologists and clinical teachers. This review summarizes the evidence base about diagnostic reasoning in the context of cardiovascular disease, with the intention that increased awareness among clinicians and teachers will improve the quality of cardiovascular diagnostic reasoning.

Sources and selection criteria

This narrative review synthesizes the diagnostic reasoning literature from medicine and applies this evidence base to the domain of cardiovascular disease. Building on our knowledge of this literature, we did a broad, systematic search using the term “diagnostic reasoning” in Google Scholar and PubMed in all date ranges. We supplemented this search strategy with a hand search of the references of key articles. We selected articles for inclusion by an informal consensus approach based on an assessment of each study’s impact and methods, with preference given to experimental studies. We organized and framed the key themes that emerged during the synthesis in an iterative fashion, by consensus. We gave precedence to references that support our shared view that the effectiveness of educational strategies and corrective measures for diagnostic reasoning should be subjected to formal evaluation.

Incidence of diagnostic error

The incidence of diagnostic errors can be estimated from several sources, including autopsy studies, surveys of patients, audits of diagnostic testing, and reviews of closed malpractice claims. 14 One retrospective analysis of internal medicine cases estimated that the rate of diagnostic error was very high, possibly in the 10-15% range. 15 This and other reports have noted that estimating rates of diagnostic error on the basis of retrospective review has limitations owing to detection and reporting biases. 16 Another observational study identified missed diagnoses of acute myocardial infarction (AMI) by counting the number of patients who returned to an emergency department, which would likely undercount the number of missed diagnoses. 3 Another limitation is that, over time, diseases can progress and diagnostic evidence can accumulate, making a diagnosis more apparent, which in hindsight can make an initially missed diagnosis seem to be a diagnostic error. A further limitation is in the calculation of diagnostic error rates, which requires counting the number of diagnostic encounters in the denominator as well as the number of misses in the numerator. Defining a representative sample of diagnostic encounters has been notoriously difficult, which makes diagnostic miss rates difficult to calculate. 3 For example, among patients in an emergency department, defining a representative sample of patients with a possible AMI to calculate a diagnostic miss rate for AMI is difficult. Notwithstanding the difficulties in measurement, diagnostic error remains a substantial problem, and tackling this problem through greater awareness is urgently needed. 13

Diagnostic reasoning versus management reasoning

Clinical reasoning integrates information on patients and clinical information, medical knowledge, and situational factors to provide care for patients. Clinical reasoning is an umbrella term that includes both diagnostic reasoning and management reasoning. Diagnostic reasoning is a classification task with various levels of specificity (for example, AMI versus STEMI versus STEMI with complete occlusion of the first obtuse marginal artery). Diagnostic reasoning has an objective endpoint, although the gold standard for a diagnosis can have problems of reliability (for example, cardiologists’ determination of the presence of congestive heart failure in clinical trials). Management reasoning involves prioritization of tasks, shared decision making with a patient and family, and dynamic monitoring of response to treatment. Management reasoning is more subjective, reflecting context, available resources, and the patient’s choice. 17 18 For the purposes of this review, we focus on diagnostic reasoning, acknowledging that this is only a portion of the task of clinical reasoning.

The two step process of making a diagnosis

Cardiac disease is diverse, with a broad range of presenting signs and symptoms, creating diagnostic challenges for the clinician. The diagnosis is often obscured at the time of initial presentation by vague, poorly characterized symptoms such as chest pain, shortness of breath, or fluttering in the chest. The clinician must elucidate the patient’s symptoms and translate the patient’s own words into the lexicon of cardiology. The clinician then connects signs and symptoms in a recognizable narrative or pattern and works inductively to place the patient’s illness in the correct diagnostic category. The process depends on one’s ability to engage the patient and elicit a clear and complete history. The importance of the history for making a correct cardiac diagnosis is well summarized by a well known quote attributed to Sir William Osler, “Listen to the patient; he will tell you the diagnosis.”

The physical examination can be helpful if it reveals an obvious sign such as a murmur or a friction rub, but often the examination is inconclusive and further diagnostic testing is needed. An organized and selective diagnostic testing strategy is key for proceeding in an effective and efficient manner. Despite skill and experience, the clinician often remains indecisive about a cardiac diagnosis, as evidenced by the fact that about a third of patients labeled with the discharge diagnosis of congestive heart failure were also initially treated for a pulmonary diagnosis. 19

Clinicians tackle clinical ambiguity and diagnostic uncertainty by using intuition and analytical reasoning. Research from the 1970s showed that the diagnostic process is composed of two parts: hypothesis generation and hypothesis testing, the so-called “hypothetico-deductive method.” 20 21 22

Hypothesis generation

Researchers have found that expert diagnosticians have between three and five diagnoses in mind within seconds to minutes after starting a diagnostic encounter. 20 21 22 23 Generating the hypothesis early in the encounter is important for the accuracy of the eventual diagnosis. In one observational study, if the clinician had the diagnosis in mind early on, the diagnostic accuracy was 95%; if not, the diagnostic accuracy fell to 25%. 24 Another observational study of the diagnostic process showed that primary care physicians can make an accurate diagnosis on the basis of just the chief complaint in 79% of cases. 22 A qualitative study showed that emergency physicians generated 25% of their diagnostic hypotheses before seeing the patient, and they generated 75% of their hypotheses within five minutes of starting the diagnostic encounter. 25 The remarkable ability of experts to recognize diagnostic possibilities was shown to be an effortless and instantaneous use of intuition, or non-analytical reasoning. 26 27 28 The cognitive psychologist Herbert Simon described how experts use intuition by stating, “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.” 29

Early work showed that the two step diagnostic process was not necessarily restricted to expert diagnosticians but was observed in novice medical students as well. 20 21 The distinguishing feature of the master diagnostician was not possession of a generalizable diagnostic skill, but rather it was the expert’s ability to mobilize and use knowledge from past experience. Moreover, Elstein found that expert diagnostic ability was “content specific.” For both novices and experts, good performance on one case did not translate to good performance on another case of different content. Thus, the accuracy of the diagnostic process was dependent on the clinician’s experiential knowledge. Cardiologists and other specialists seem to know this implicitly. They work within their content area and are quick to seek consultation with others when the diagnosis seems to fall outside of their content expertise.

The expert’s ability to mobilize and use the appropriate knowledge for an accurate diagnosis has led researchers to contemplate the possible knowledge structures that the expert might use to store experiential knowledge. 30 Several possible structures have been proposed, including propositional networks, prototypes, semantic axes, and exemplars. 26 31 32 33 Research suggests that clinicians are likely flexible in how they encode, access, and mobilize knowledge for various diagnostic encounters. 30

Studies have shown that the mechanism of diagnostic hypothesis generation varies, depending on a clinician’s amount of experience and level of training. 34 35 Students lack clinical experience and rely on biomedical knowledge to make causal connections to formulate diagnostic hypotheses. 36 37 Observing a student evaluating a patient with chest pain can show how this early method can be slow and relatively ineffective. As trainees gain experience, formal knowledge of basic pathophysiology is combined with expanding clinical experience as their diagnostic competence matures.

Illness scripts

With clinical experience, a student’s knowledge of disease is expanded to include signs, symptoms, and other clinical features that are observed in actual patients. The learner’s biomedical knowledge and growing clinical experience are reorganized and “encapsulated” into narrative structures that are referred to as illness scripts. 35 38 39 40 The term “script” implies a series of events that, along with enabling conditions, define a knowledge structure for remembering a diagnosis. The presentation of a typical patient with chest pain provides an excellent example of an illness script. Imagine a patient with a family history of coronary artery disease and a smoking history, who presents with a three week history of progressive chest pressure in the mid chest that occurs with exertion and resolves after a few minutes of rest. The enabling conditions, or risk factors, and the classic sequence of events are combined with a basic understanding of coronary artery disease to become encapsulated into memory as an illness script of unstable angina. With this mental representation, a clinician might envision an actual patient combined with a mental image of plaque rupture and myocardial ischemia. An illness script can sometimes incorporate a prototypical patient with typical features, or in other cases it can incorporate a specific patient with particular features. With time, learners accumulate a repertoire of illness scripts that is idiosyncratic to each learner on the basis of his or her individual experience. 35

With further exposure to actual cases, clinicians gain experiential knowledge by remembering individual diagnostic instances in episodic memory. When an instance or object is labeled, categorized, and placed in long term memory, it is remembered using a knowledge structure called an exemplar. 40 41 Rather than abstracted knowledge, exemplars are direct memories of specific patients with unique features. Exemplars may remain as distinct memories or may become less distinct and more generalized over time. Each clinician has an idiosyncratic patient experience, and the exemplars that are remembered are the result of a clinician’s unique experiences and are not generalizable to other clinicians.

An exemplar can be retrieved from memory effortlessly and unconsciously. The experiential knowledge base of an experienced clinician is analogous to a large file cabinet filled with many exemplars, filed according to diagnostic category. The range and variety of exemplars gives the clinician a sense of the diversity within a diagnostic category and the distinguishing features between diagnostic categories. Clinicians know that exemplars of anterior and posterior myocardial infarctions are within the same category and that an acute aortic dissection belongs to a different category, on the basis of the distinct features of the exemplars. With experience, they are able to quickly recognize the contrasting features of different diagnostic categories, similar to recognizing the contrasting appearance of a right bundle branch block and a left bundle branch block on an electrocardiogram. 42 In addition, expert diagnosticians develop an intuitive sense of the prevalence of a diagnosis based on the number of encounters that are stored in long term memory as exemplars. 43 Expert clinicians know intuitively that an acute myocardial infarction is more common than an aortic dissection on the basis of an implicit sense of the relative number of exemplars in each category stored in long term memory.

Symptom phenotypes

Because exemplars are mental representations of a range of specific clinical symptom constellations (that is, phenotypes), a recent study examined the range of symptom phenotypes in a registry of young patients presenting with AMI. 44 This registry offered an opportunity to study symptom phenotypes of AMI because it prospectively and systematically recorded detailed information about patients’ presenting symptoms. Among 3501 patients with AMI, 488 unique symptom phenotypes were identified, showing the degree of variation that challenges diagnosticians. Significantly more symptom phenotypes occurred in women than in men, which might be a source of ambiguity that could help to explain why the diagnosis of AMI is missed more frequently in women than in men. 3 At a population level, the most common symptoms were chest pain, radiation, shortness of breath, and diaphoresis, and these symptoms generally describe the prototypical AMI patient. Interestingly, the phenotype with the prototypical combination of symptoms represented only 1% of the patients in this multicenter cohort. Cognitive psychology studies have shown that many examples are critically important for learning, 45 46 and this study suggests that learners need extensive experience to acquire adequate exposure to various phenotypes to permit the generation of a rich library of exemplars of AMI.

Abductive reasoning

When the patient’s presentation is ambiguous and a diagnostic possibility does not intuitively and immediately come to mind, clinicians revert to more reflective reasoning methods. 47 For this, experienced clinicians make use of a thought process called abductive reasoning, or reasoning toward the most plausible hypothesis. 48 Abductive reasoning takes the following course: “The surprising fact C is observed. But if A were true, C would be a matter of course. Hence, there is reason to suspect that A is true.” 49 For example, “A patient with a positive troponin is observed. If an AMI is present, troponin would be elevated as a matter of course. Hence, there is reason to suspect an AMI in this patient.” Abductive reasoning is a way to work backward and generate hypotheses that might explain observations. It is a form of reasoning that yields hypotheses, not conclusions. It is also the thinking that is used to generate a differential diagnosis by asking, “What other diagnoses could cause the observed findings?” Abductive reasoning describes a method for hypothesis generation that experienced clinicians use when reversion to more reflective thinking is needed.

Hypothesis testing

After rapidly generating several diagnostic possibilities, the clinician begins testing various possibilities, starting with the most likely ones. The decision about which diagnosis to test depends on probability, as well as on the severity and acuity of a potential diagnosis. For example, a lower probability threshold and more urgent testing strategy might be applied for severe life threatening diagnoses such as AMI, pulmonary embolus, or aortic dissection. The prudent strategy might prioritize diagnostic testing for life threatening diseases, but for the most part the order of testing and diagnostic reasoning becomes an exercise in probability.

Bayesian reasoning

The field of cardiology has led the way in thinking probabilistically about possible diagnoses. Clinicians have been shown to use bayesian reasoning to determine the conditional probability of coronary artery disease. 50 Bayesian reasoning provides a method for updating a baseline probability estimate on the basis of the strength of new information. Of course, clinicians rarely, if ever, do formal calculations, but this subjective notion of probability helps the clinician to think about an individual and to use probability estimates to zero in on the correct diagnosis.

No test is perfect, and the strength of new evidence from cardiac testing depends on the degree of imperfection of the test, as measured by the test’s operating characteristics. Sensitivity, or the true positive rate, measures the number of patients with disease who test positive; specificity, or the true negative rate, measures the number of patients without disease who test negative. Likelihood ratios can be calculated by combining sensitivity and specificity into dimensionless numbers that give an intuitive estimate of the strength of a positive or negative test result. 9 51
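The arithmetic behind likelihood ratios and probability updating can be sketched briefly. The following is a minimal illustration, not part of the original article; it uses the troponin T operating characteristics cited later in this section (sensitivity 95%, specificity 80%) and an assumed 50% pre-test probability.

```python
# Sketch: likelihood ratios from sensitivity and specificity, and
# probability updating via the odds form of Bayes' rule.

def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a test with the given operating characteristics."""
    lr_pos = sensitivity / (1 - specificity)   # true positive rate / false positive rate
    lr_neg = (1 - sensitivity) / specificity   # false negative rate / true negative rate
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    """Convert probability to odds, multiply by the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Troponin T characteristics as cited in this review: sensitivity 95%, specificity 80%
lr_pos, lr_neg = likelihood_ratios(0.95, 0.80)   # LR+ = 4.75, LR- = 0.0625

# A clinician anchoring on a 50% pre-test probability of AMI (an assumed anchor):
print(post_test_probability(0.50, lr_pos))  # positive result -> ~0.83
print(post_test_probability(0.50, lr_neg))  # negative result -> ~0.06
```

As the text notes, clinicians rarely do such calculations formally; the value of the likelihood ratio is as an intuitive, dimensionless measure of the strength of a test result.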

This approach has limitations too. Measuring sensitivity and specificity requires systematic testing of patients in a research setting, and the numbers, as for any clinical measurements, are relatively imprecise estimates. Furthermore, when the operating characteristics of a test are determined in a rigorously controlled setting and then used in a practice setting or for screening of individuals with a different spectrum of disease, the test’s capability can be adversely affected by spectrum bias. 52 For example, a study of the current generation troponin T assay in a rigorous research setting showed that the test had a sensitivity of 95% and specificity of 80%. 53 This research study, however, specifically excluded patients with renal failure or septic shock. When used clinically in an emergency department setting without rigorous restrictions, the false positive rate would likely increase, which would markedly decrease the specificity. 54 Thus, tests should be ordered deliberately to avoid spectrum bias, so as to maximize the operating characteristics of the test and minimize false positive results.
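The consequence of spectrum bias can be shown numerically. The sketch below is illustrative and not from the article: the 10% prevalence and the degraded 60% specificity are hypothetical assumptions, while the 95%/80% figures come from the troponin T study cited above.

```python
# Sketch: how a spectrum-bias-driven drop in specificity erodes the
# positive predictive value (PPV) of a test at a fixed disease prevalence.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

prevalence = 0.10  # assumed prevalence of AMI among tested patients (hypothetical)

# Research setting: sensitivity 95%, specificity 80%, as in the cited study
print(positive_predictive_value(prevalence, 0.95, 0.80))  # ~0.35

# Unselected emergency department population: suppose specificity falls to 60%
print(positive_predictive_value(prevalence, 0.95, 0.60))  # ~0.21
```

The same positive result therefore carries less diagnostic weight when the test is applied to a population broader than the one in which its operating characteristics were measured.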

Over time, the prevalence of coronary artery disease, like other cardiac diagnoses, and associated risk factors and local environmental factors, has changed, requiring an updating of the probability estimates. 55 56 Nevertheless, approximating pre-test probability remains an important step in the process of calibrating one’s estimate of diagnostic probability.

Anchoring and adjusting

Cognitive psychologists have long recognized that people are not necessarily rational or mathematically rigorous in their decision making. 57 58 59 People use learned rapid mental shortcuts called heuristics to enable rapid decisions to be made under conditions of uncertainty. Some authors have asserted that heuristics represent a speed-accuracy trade-off, in which the speed of heuristics leads to bias and error. 60 Others have argued that heuristics, although sometimes associated with error, are very useful for rapid decision making. 59 61 One heuristic is anchoring and adjusting, 60 which describes how a decision maker quickly and subjectively estimates (anchors) the baseline probability of an event and then adjusts the probability estimate on the basis of new information. Anchoring and adjusting is an informal shortcut that replaces the formal use of Bayes’ rule. It is an intuitive two step process for estimating probability that works in parallel with the two step process of hypothesis generation and hypothesis testing. This heuristic can be affected by two potential biases. One bias is anchoring, whereby the decision maker becomes too stuck on the initial base rate and does not adequately adjust after new evidence is received. The other bias is base rate neglect, whereby the decision maker jumps to a subsequent probability estimate on the basis of new evidence without adequate regard for the initial base rate or disease prevalence.

Our recent experimental study evaluated the effectiveness of teaching the concept of bayesian reasoning to improve the accuracy of diagnostic probability estimates. 62 Students were randomized to receive an instructional video on anchoring and adjusting, likelihood ratios, and bayesian reasoning, versus exposure to repeated examples of cases with feedback, versus no intervention. Previous studies suggested that trainees’ subjective probability estimates are often highly inaccurate. 63 64 This study, however, showed that all study participants gave probability estimates that were unexpectedly better than predicted by previous studies. The students who received the conceptual instruction on bayesian concepts showed a modest advantage in estimating the post-test probability of disease, suggesting that even brief instruction on bayesian concepts might improve students’ use of statistical heuristics to estimate probability.

Figure 1 shows how a clinician would use the anchoring and adjusting heuristic. The clinician estimates (anchors) a pre-test probability of a diagnosis on the x axis based on knowledge of the base rate or prevalence of a disease. The clinician could draw a vertical line to the curves for either a positive or a negative test result and then a horizontal line to the y axis to determine an estimate of post-test probability. The degree of the adjustment, or the shift in the probability estimate, depends on the strength of a positive or negative test result, which can be quantified using positive or negative likelihood ratios.

Fig 1

This graph visually represents how a clinician could use the anchoring and adjusting heuristic. A pre-test probability estimate of about 50% (the anchor) is chosen on the x axis. A vertical line is drawn to the curve for either a positive or a negative test result, and then a horizontal line is drawn to the y axis to determine the post-test probability. The shift shows the degree of the adjustment of the estimate, which depends on the strength of either a positive or a negative test result


Figure 2 shows how the adjustment or shift in the probability estimate can be asymmetric for different tests. The panel on the left shows a test that is highly specific but not very sensitive, such as a chest radiograph for the diagnosis of congestive heart failure. A positive test result would result in a larger shift in the post-test probability estimate. The panel on the right shows a test that is highly sensitive but not very specific, such as a D-dimer for the diagnosis of pulmonary embolus. A negative test would result in a larger negative shift in the post-test probability estimate. 9

Fig 2

This graph visually represents how the shift in probability can be asymmetric for different tests. The panel on the left shows a test that is highly specific but not very sensitive, whereby a positive test would result in a larger shift in the post-test probability estimate. The panel on the right shows a test that is highly sensitive but not very specific, whereby a negative test would result in a larger shift in the post-test probability estimate
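The asymmetry depicted in figure 2 can be reproduced with a short calculation. This is a sketch under stated assumptions: the operating characteristics below (chest radiograph 60% sensitive/95% specific; D-dimer 97% sensitive/40% specific) and the 50% anchor are illustrative values chosen for the example, not figures from the article.

```python
# Sketch: asymmetric post-test probability shifts. A highly specific test
# moves the estimate most when positive; a highly sensitive test moves it
# most when negative.

def post_test_probability(pre_test_prob, sensitivity, specificity, positive):
    """Bayes' rule in odds form for a positive or negative test result."""
    if positive:
        lr = sensitivity / (1 - specificity)   # LR+
    else:
        lr = (1 - sensitivity) / specificity   # LR-
    odds = pre_test_prob / (1 - pre_test_prob) * lr
    return odds / (1 + odds)

pre = 0.50  # assumed pre-test anchor

# Chest radiograph for heart failure: specific (95%) but not sensitive (60%)
print(post_test_probability(pre, 0.60, 0.95, positive=True))   # large upward shift, ~0.92
print(post_test_probability(pre, 0.60, 0.95, positive=False))  # modest downward shift, ~0.30

# D-dimer for pulmonary embolus: sensitive (97%) but not specific (40%)
print(post_test_probability(pre, 0.97, 0.40, positive=True))   # modest upward shift, ~0.62
print(post_test_probability(pre, 0.97, 0.40, positive=False))  # large downward shift, ~0.07
```

In short, with these assumed values a positive chest radiograph rules in far more strongly than a negative one rules out, and the reverse holds for the D-dimer, mirroring the two panels of figure 2.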

Dual process theory: system 1 and system 2

The two step diagnostic process is compatible with dual process theory. According to this cognitive psychology theory, two definable systems for thinking exist: one is non-analytical or intuitive thinking, and the other is analytical thinking. Neuroscience studies using functional magnetic resonance imaging have shown that these two thinking patterns involve distinctly different areas of the brain and have different metabolic requirements. 65 66 Dual process theory draws a distinction between system 1 thinking, which is intuitive, automatic, quick, and effortless, and system 2 thinking, which is analytic, reflective, slow, and effortful. 58 67 68 System 1 thinking is analogous to driving a car down a familiar, empty highway in that it works unconsciously and effortlessly. System 2 thinking is more like parking a car in a tight parking space, which requires deliberate and effortful attention.

System 1 thinking is triggered by an association between new information and a similar example, or exemplar, stored in long term memory. 69 The association is effortless and depends on the strength of the association, which can be influenced by factors such as the number of examples in memory, the number of common features, and the recency or vividness of the memory. 70 System 2, on the other hand, uses computation, analysis, and logical rules, and places a heavy burden on working memory. 69 Expert clinicians make use of system 1 and system 2 thinking interchangeably, depending on the diagnostic task at hand.

Some authors have promoted the idea that errors occur because of short cuts, or heuristics, which are used by system 1 thinking and not corrected by system 2 reasoning. 58 They provide the following advice: “The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield and slow down, and ask for reinforcement from System 2.” 58 Other authors have countered this advice by stating that, “Perhaps the most persistent fallacy in the perception of dual-process theories is the idea that Type 1 processes (intuitive, heuristic) are responsible for all bad thinking and that Type 2 processes (reflective, analytic) necessarily lead to correct responses… So ingrained is this good-bad thinking idea that the same dual process theories have built it into their core terminology.” 67

Does the speed of system 1 thinking lead to diagnostic error? An experimental study showed that a correct diagnosis was actually associated with less time spent on the diagnostic task. 71 In other experimental studies in which investigators cautioned participants about speed and errors and encouraged participants to be deliberate and thorough, these instructions had no effect on diagnostic accuracy. 72 73 74 In another study, participants were allowed to re-think and revise their initial diagnosis, but revisions were more likely to be incorrect. 75 One study, however, showed that extreme time pressure can have a negative effect on diagnostic accuracy, possibly by inducing anxiety in participants. 76 Most of the evidence suggests that relying less on system 1 and more on system 2, as Kahneman and others advise, does not increase diagnostic accuracy. 77 Kahneman did not study experts, and the context specific knowledge of expert diagnosticians seems to make system 1 thinking more of a strength than a weakness.

Cognitive biases

Some authors have associated diagnostic error with many cognitive biases. 78 79 80 81 82 83 Surprisingly few studies, however, have empirically examined the role of cognitive biases in diagnostic error. One systematic review of cognitive bias in healthcare indicated that for diagnostic reasoning, only seven biases have actually been empirically evaluated. 84

The effect of bias on diagnosis can be studied either in an artificial experimental setting or in a practice setting where diagnostic errors have been identified and reviewed retrospectively. One experimental study showed evidence of “satisfaction of search” bias among radiologists shown radiographs with multiple artificial lung nodules. Participants often stopped searching after identifying only one nodule. 85 Another experimental study examined base rate neglect and found little evidence that this bias affects experts. 43 Additionally, these investigators found that the degree of self-reported experience with a diagnosis correlated well with an expert’s intuitive estimate of its base rate. Other experimental studies examined availability bias, which refers to how recent exposure to a case can bias the assessment of a new case, and these studies showed mixed results. 86 87 88 Some experimental studies showed that availability actually enhanced diagnostic accuracy. 89 90 For practitioners, this makes sense. The active engagement of practice with repeated exposures to a range of diagnostic encounters tends to build confidence among practitioners, supporting the notion that availability may improve diagnostic accuracy in the setting of real world practice.

Other observational studies have examined the role of bias through retrospective reviews of diagnostic errors in actual practice. One study examined 100 cases of diagnostic error in the emergency department and found that 68% were associated with a cognitive bias, primarily premature closure. 15 Another observational study, however, found no role of bias in reported cases of diagnostic error. 1 One prospective study showed that diagnostic experts were unable to agree on which bias actually contributed to the diagnostic error, and the study also found that the study participants were themselves affected by hindsight bias. 91

Strategies for experienced clinicians to avoid diagnostic error

Rather than attributing diagnostic error to biases or flawed cognitive processes, some authors have argued that diagnostic error is more commonly due to an inability to adequately mobilize necessary knowledge, or due to knowledge deficits. 1 77 Experimental studies have shown that the ability to mobilize knowledge and avoid diagnostic errors improves with experience. 86 92 93 Experimental studies have also shown that improved diagnostic accuracy is associated with the acquisition of both formal knowledge and experiential knowledge. 43 71 92

Simply imploring clinicians to routinely slow down and carefully monitor their intuitive thinking does not seem to be effective for improving diagnostic accuracy. Rapid and intuitive recognition of patterns is an important part of the diagnostic process, particularly in cardiology, and constraining this activity does not seem to be a good strategy. The diagnostic process does, however, allow the opportunity to reflect on the particular features of the diagnostic encounter. Clinicians often ask, “What am I missing? What else could this be?” In studies of tough cases, consciously acknowledging the difficulty of the diagnostic challenge increased accuracy. 94 95 Experienced surgeons seem to know when to slow down at critical moments. 96

Mobilizing knowledge through deliberate reflection is a promising technique for improving diagnostic accuracy. 97 98 99 100 101 Reflecting on the concordant and discordant features between the patient and the various diagnostic hypotheses offers an opportunity for mid-course correction. Deliberate reflection enables clinicians to overcome distracting and misleading features of a case, but it requires that the clinician have adequate experience and sufficient clinical knowledge about the diversity of diagnostic features. This has been demonstrated elegantly in a more recent experimental study. 102 The investigators showed that physicians can be immunized against availability bias through an intervention that increased their knowledge of the features that discriminate between similar looking diseases. Of course, the strategy of deliberate reflection also requires recognition that an initial diagnostic impression may be unsettled and needs further reflection. Experimental studies have shown that clinicians’ diagnostic confidence correlates fairly well with diagnostic accuracy and that the correlation improves with experience, but overconfidence is likely a persistent problem and a potential impediment to the optimal use of deliberate reflection. 94 103
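The core of deliberate reflection — tallying concordant and discordant features for each hypothesis — can be sketched as a simple set comparison. The case findings and disease feature sets below are invented placeholders, not validated clinical criteria:

```python
# A minimal sketch of deliberate reflection: tally the features of a case that
# are concordant and discordant with each diagnostic hypothesis.
# Disease feature sets and case findings are invented placeholders.

case_findings = {"pleuritic chest pain", "tachycardia", "normal ecg", "hypoxaemia"}

hypotheses = {
    "pulmonary embolism": {"pleuritic chest pain", "tachycardia", "hypoxaemia", "normal ecg"},
    "acute mi":           {"crushing chest pain", "st elevation", "tachycardia"},
    "pneumothorax":       {"pleuritic chest pain", "absent breath sounds", "hypoxaemia"},
}

for name, expected in hypotheses.items():
    concordant = case_findings & expected
    discordant = case_findings - expected  # present in the case but unexplained by this diagnosis
    print(f"{name}: {len(concordant)} concordant, {len(discordant)} discordant")
```

The exercise is only as good as the knowledge encoded in the feature sets, which mirrors the review’s point that deliberate reflection depends on adequate clinical knowledge.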

Checklists have been promoted as tools for improving diagnostic accuracy. 104 Checklists can be classified as either content specific tools that trigger the retrieval of relevant disease specific knowledge or process focused tools that guide adherence to optimal thinking. 105 Unfortunately, studies of the effectiveness of checklists have been disappointing. 105 Checklists focusing on content have shown some promise in a few limited experimental studies, 106 107 but they appear more effective for junior clinicians and for more difficult cases. 105 Whether experimental studies of checklists are generalizable to the practice setting and whether checklists would be used consistently in the real world setting remain open questions.

Computerized decision support programs have also been promoted for improving diagnostic accuracy but have fallen short of expectations. 108 109 A review of a limited number of studies suggested some potential benefit for junior clinicians, but uptake of this technology in practice has been very limited. 110 In part, this may be a consequence of logistical difficulties with the software platform. A recent study of an electronic differential diagnostic support tool showed that computerized decision support can increase the number of diagnostic hypotheses and the probability that the correct diagnosis would be considered, and the impact was greater for novice clinicians. 136
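The way such a differential diagnostic support tool broadens the hypothesis list can be illustrated with a toy ranking over a knowledge base. Everything below — the disease entries, the findings, the function name — is a hypothetical sketch, not a description of any real product:

```python
# A toy sketch of an electronic differential diagnosis support tool: rank
# knowledge-base entries by the number of entered findings each one explains.
# The knowledge base is entirely hypothetical and vastly simplified.

KNOWLEDGE_BASE = {
    "heart failure":      {"dyspnoea", "orthopnoea", "oedema", "raised jvp"},
    "copd":               {"dyspnoea", "wheeze", "smoking history"},
    "anaemia":            {"dyspnoea", "fatigue", "pallor"},
    "pulmonary embolism": {"dyspnoea", "pleuritic pain", "tachycardia"},
}

def suggest_differential(findings: set, top_n: int = 3) -> list:
    """Return the diagnoses whose feature sets overlap most with the findings."""
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda dx: len(findings & KNOWLEDGE_BASE[dx]),
        reverse=True,
    )
    return scored[:top_n]

print(suggest_differential({"dyspnoea", "oedema", "raised jvp"}))
```

Even this crude overlap count surfaces hypotheses the clinician may not have considered, which is the mechanism by which such tools increase the probability that the correct diagnosis appears on the list.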

Cognitive forcing strategies for reducing diagnostic error have also been studied. 78 81 Three experimental studies assessed this approach, teaching participants to recognize specific cognitive biases and to apply corresponding forcing strategies. 111 112 113 These studies showed that cognitive forcing strategies had no effect on diagnostic errors or accuracy. Humans are not capable of consciously recognizing unconscious biases, 114 115 so it seems predictable that teaching this process would not succeed in reducing errors. The task is made more difficult by the fact that more than 100 cognitive biases have been described in the general literature and at least 38 in the medical literature. 116 Moreover, as noted earlier, even experts have trouble consistently and correctly identifying specific cognitive biases. 91

Notwithstanding the difficulties of human introspection, flawed thinking undoubtedly affects diagnostic reasoning. Psychologists have described three general types of fallacies: hasty judgments, biased judgments, and distorted probability estimates. 58 General knowledge of these broad categories might help practitioners develop habits that guard against bias.

A particular concern is implicit bias regarding race, gender, sexuality, and ability, among other factors. The word bias in this context refers to stereotyping, which is “the process by which people use social categories (for example, race, sex) in acquiring, processing, and recalling information about others.” 117 A classic experimental study more than two decades ago showed how race and sex can affect the diagnostic evaluation of chest pain and referral patterns for diagnostic cardiac catheterization. 118 Clearly, implicit bias affects diagnostic reasoning, and overcoming this concern should be central to medical education and professional development. Suggested educational strategies for overcoming implicit bias include emphasizing fairness and egalitarian goals, encouraging identification with common identities with patients, counter-stereotyping, and trying to understand the patient’s perspective. 119 More research is needed to identify the most successful educational and support strategies for reducing the adverse effects of implicit bias on the quality of diagnostic reasoning.

Educational strategies to improve diagnostic accuracy

Teaching diagnostic competence in cardiology starts with instruction in the basics of history taking, physical diagnosis, interpretation of electrocardiograms, and a variety of other basic diagnostic skills that are prerequisites for making a cardiovascular diagnosis. 120 121 122 Professional societies have formulated expanded lists of competencies that are needed for interpretation of cardiovascular diagnostic tests, 123 and these competencies are required for board certification and practice. 124

True expertise in cardiovascular diagnosis, however, resides in an ability that is learnt through experience and years of deliberate practice and reflection. 125 126 The diagnostic expert uses experiential knowledge gained in the context of training in clinical rotations and specialized practice. Experiential knowledge has been described as “a constantly evolving, dynamic resource, and expertise resides in the ability and willingness not only to use and build, but also to purposefully adapt and re-engineer knowledge effectively.” 125

Several educators and investigators have recommended a variety of educational strategies for promoting diagnostic excellence. 13 115 125 126 127 128 129 The original research in the cognitive science of medical diagnosis was started by educators who were searching for the best way to turn novices into experts. 18 19 20 Clearly, knowledge, both formal and experiential, is a critical determinant for accurate diagnostic reasoning. 130 That better integration of basic science instruction with clinical experience is a successful strategy for improving diagnostic reasoning is therefore not surprising. Vertical integration of the basic sciences with clinical experience can create cognitive conceptual coherence that seems to improve diagnostic reasoning. 131 132 This strategy may facilitate the formation of illness scripts and make knowledge more accessible at the time of a diagnostic encounter. This integrative strategy may also make basic science education more compelling and memorable because it is linked to relevant clinical context. Contextualizing basic science to specific clinical situations does not necessarily transfer from one content area to another, which may explain why diagnostic expertise is content specific. Most clinical teaching occurs at the bedside, and clinical teachers at every level of education need to be aware of the importance of integrating basic science knowledge with experience. Other educational strategies have been described, but measuring educational outcomes is difficult, and relatively few studies have formally evaluated the effect of educational strategies on learning outcomes or have compared recipients of educational interventions with control groups. 130 131 132

Societies, professional meetings, and journals have been established to promote educational strategies and interventions with the aim of achieving diagnostic excellence. 133 134 135 The idea is that greater awareness of the diagnostic process and attention to the sources of diagnostic error could help clinicians to make the most of their experience, purposefully seek feedback, and be more intentional about avoiding diagnostic error. Physicians seek causal explanations and mechanistic concepts, but teaching abstract cognitive psychology concepts out of context may not be effective. Interleaving these cognitive psychology concepts into content specific continuing medical education could be effective, but strategies to improve diagnostic reasoning through continuing medical education need further study.

This narrative review synthesizes the accumulated research into how experts make a diagnosis and considers the implications for learners, clinicians, and teachers. Application of the cognitive science of diagnostic reasoning should help learners to make the most of their clinical experiences and improve the effectiveness of clinicians and teachers. Effective educational strategies are those that focus on the acquisition and mobilization of knowledge, both experiential and formal. Other educational strategies and interventions are promising but need formal study and careful evaluation. Future research should continue to focus on diagnostic reasoning in the context of cardiovascular medicine and other subspecialties, with the goal of improving the accuracy and reliability of the diagnostic process and the quality of care for our patients.

Research questions

How can researchers design and evaluate real world interventions to assess the effect of implicit bias on the diagnostic process?

How can teaching of diagnostic reasoning be effectively embedded into subspecialty teaching rotations?

How can artificial intelligence and computerized decision support tools effectively improve the diagnostic process?

Patient involvement

One of the authors reached out to a patient who was thrilled that we were working on this narrative review because his life was dramatically affected by the diagnostic process. The patient and his wife read the manuscript and offered useful comments and encouragement.

The patient was referred to one of the authors with an undiagnosed and severe superior vena cava syndrome. The patient and his wife (a nurse) were very frustrated at the time of presentation because he had been extensively evaluated and the cause of his problem remained undiagnosed. A careful history detailed the enabling conditions and the time sequence of his illness, which led to an explanatory diagnostic hypothesis. This hypothesis led to the confirmatory diagnostic testing and effective treatment. This patient is an example of Osler’s adage to “Listen to the patient; he will tell you the diagnosis.” This patient remains an inspiration and a reminder of how an accurate medical diagnosis can have an enormous impact on a patient’s life.

Series explanation: State of the Art Reviews are commissioned on the basis of their relevance to academics and specialists in the US and internationally. For this reason they are written predominantly by US authors

Contributors: All of the authors contributed to all aspects of the preparation of this review. JEB is the guarantor.

Competing interests: We have read and understood the BMJ policy on declaration of interests and declare the following interests: JEB receives royalties from Dementi Milestone Publishing for the book The Science of the Art of Medicine: A Guide to Medical Reasoning .

Provenance and peer review: Commissioned; externally peer reviewed.



Critical thinking and diagnostic reasoning of the heart and cardiovascular system

Affiliation.

  • 1 Advanced Clinical Practitioner-Emergency department, Belfast Health and Social Care Trust.
  • PMID: 34761982
  • DOI: 10.12968/bjon.2021.30.20.1172

This is the second of two articles exploring assessment and clinical reasoning of conditions relating to the heart and cardiovascular system in the context of emergency care. In the last article, the structure and function of the heart were reviewed, and reference was made to many of the conditions that may affect the heart. In addition, the common presenting complaints of cardiac conditions were highlighted, together with important aspects of the history for each symptom. The full cardiac examination was outlined. In this article, some of the common cardiac conditions will be discussed and linked to common findings in the history, examination, and investigations.

Keywords: Acute coronary syndromes; Arrhythmias; Clinical examination; Syncope.

  • Cardiovascular System*
  • Emergency Medical Services*
  • Heart Diseases* / diagnosis
Principles and Practice of Hospital Medicine, 2e

Chapter 8:  Diagnostic Reasoning and Decision Making

Laurence Beer; Lucas Golub; Dustin T. Smith


INTRODUCTION


Diagnosis is the art of identifying a disease from the signs, symptoms, and test results of a patient. Diagnosis stems from the Greek word diagignoskein , which means to distinguish or discern. Indeed, the ability to distinguish or discern a patient’s underlying illness is critical to being an effective clinician as a hospital medicine provider. In many cases, hospitalized patients may be quite complicated, with multiple competing possible explanations for their underlying signs or symptoms. Patients do not always read the textbook (ie, they may not always describe their symptoms or have findings on exam that are pathognomonic or as classically described). Therefore, diagnostic reasoning and diagnostic decision making are crucial skills for hospital medicine providers. In addition, cognitive biases exist, and diagnostic errors occur when any mistake or failure in the diagnostic process leads to a misdiagnosis, a missed diagnosis, or a delayed diagnosis. This chapter will discuss diagnostic reasoning and diagnostic decision making.

CLINICAL REASONING

Clinical reasoning is the process by which a clinician applies reasoning in combination with the clinician’s knowledge and skills ( Figure 8-1 ). Clinical reasoning is a constant process that does not end when the diagnosis has been made. It may be considered complete upon autopsy or when a gold standard has confirmed a diagnosis, but it is important to acknowledge that in many instances the gold standard is not 100% accurate. In the hospital setting, clinicians operate under a running diagnosis until the diagnosis has been confirmed and/or until the patient has improved both subjectively and objectively. Sometimes patients do not improve when a treatment strategy is implemented; thus, it is important that providers continuously use clinical reasoning skills as information is collected in an attempt to verify the diagnosis. Once a treatment or workup plan has been implemented, it is crucial to reassess the patient’s response to further confirm whether the correct diagnosis has been made and the right treatment strategy chosen ( Figure 8-2 ). If the diagnosis does not appear correct or the treatment strategy is failing, the clinician must review the information and data collected and reconsider the other possible diagnoses that could explain the patient’s presenting signs and symptoms. Hence, a clinician’s ability to successfully reason and diagnose is in some ways anchored by that clinician’s ability to create an adequate differential diagnosis.
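The running-diagnosis loop described above can be sketched schematically: treat the leading hypothesis, reassess, and move down the differential when the patient fails to improve. The differential and the treatment responses below are invented placeholders standing in for clinical judgment:

```python
# Schematic sketch of the running-diagnosis loop: treat the leading hypothesis,
# reassess, and revise when the patient does not improve.
# The differential and response data are invented placeholders.

differential = ["pneumonia", "heart failure", "pulmonary embolism"]
responds_to = {"pulmonary embolism"}  # hypothetical: only this treatment produces improvement

working_diagnosis = None
for hypothesis in differential:
    # implement treatment for the current working diagnosis, then reassess
    if hypothesis in responds_to:
        working_diagnosis = hypothesis  # subjective and objective improvement observed
        break
    # no improvement: review the data and reconsider the differential

print(f"Confirmed working diagnosis: {working_diagnosis}")
```

If the loop exhausts the differential without improvement, `working_diagnosis` stays `None` — the computational analogue of needing to broaden the differential diagnosis.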

Figure 8-1. Clinical reasoning. (image)

Figure 8-2. The process of clinical reasoning. (image)

DIFFERENTIAL DIAGNOSIS


Evaluating and Enhancing Critical Thinking in the Workplace: GAAT – An Essential Tool

In an era of information overload and misinformation, critical thinking has become a strategic skill for businesses. It enables employees to navigate information with discernment, solve complex problems, and make well-informed decisions. This skill isn’t reserved for a select few; it is crucial for everyone, especially in a professional environment where decisions must be well-founded and informed.

Thinking critically and rationally, while being aware of our cognitive biases, is fundamental. How can you ensure that this skill is deeply embedded in your organisation? The GAAT (General Analytical Aptitude Test) is a key tool to evaluate and enhance the critical thinking of your employees, preparing them to successfully tackle professional challenges.

GAAT: More Than Just a Logic Test, a Test of Critical Thinking

The GAAT critical reasoning test is designed to evaluate an individual’s ability to think clearly, logically, and independently. It measures four specific skills: argument evaluation, critical analysis, deductive reasoning, and inductive reasoning. The test assesses how a person analyses, interprets, and evaluates information of various types to draw rational conclusions. By using scenarios that closely resemble professional contexts, GAAT provides a comprehensive view of the cognitive skills needed to solve problems, analyse arguments, and make objective decisions in demanding environments. Although not widely known, the GAAT psychometric test deserves to be explored in depth to understand its undeniable advantages across a variety of contexts.

Evaluating Critical Thinking in Recruitment: A Key Test for High-Level Profiles

The GAAT is recommended for recruiting positions of responsibility as well as for administrative, technical, or scientific roles that require critical thinking and a methodical approach to processing information. The test helps differentiate candidates by assessing their ability to analyse problems clearly and make relevant decisions in complex environments.

How to Use GAAT in Recruitment

The GAAT is generally reserved for shortlisted candidates to ensure they have the desired reasoning skills for the role. To ensure the test performance reflects the candidates' abilities, inform them of the test’s duration (40 minutes) and the mental effort required. Since the test is timed, candidates will need to answer a series of questions that require concentration and sharp thinking. It is recommended they choose a moment when they feel well-rested and free from distractions.

For instance, when recruiting a project manager, focus on two key GAAT factors: the ability to "evaluate arguments" and "think critically". A project manager often needs to assess various proposals and solutions presented by their team. Strong argument evaluation skills enable them to identify the strengths and weaknesses of these proposals and make informed choices. Additionally, a candidate with solid critical analysis skills will be better equipped to anticipate risks, assess the implications of different actions, and avoid potential obstacles, allowing them to handle project challenges more effectively.

Evaluating Critical Thinking for Internal Promotions: Ensuring Rational and Targeted Decisions

Internal mobility is often preferred over external recruitment, as companies already know the candidates and can vouch for their level of commitment. However, internal mobility is prone to biases, especially emotional ones, as familiarity with the candidates may cloud judgment about their actual suitability for a role. This becomes even more problematic for strategic positions, where using tests to ensure objective assessment is in the company’s best interest. The GAAT provides insights into the subtleties of a candidate’s thinking, offering a precise analysis of their reasoning style, which is crucial for ensuring performance, particularly in high-responsibility roles.

How to Use GAAT for Internal Mobility

In internal mobility, the same care should be taken as in recruitment to ensure an optimal experience for the candidate. Regarding results, if, for example, the goal is to promote someone internally to a financial director role, particular attention should be given to "critical analysis" and "deductive and inductive reasoning". Critical analysis will be crucial for risk assessment, identifying weaknesses in financial plans, and proposing solutions to strengthen them. Strong deductive reasoning will ensure the accurate application of accounting principles to specific situations, guaranteeing that financial decisions align with company policies and legal requirements. Inductive reasoning, on the other hand, will be essential for excelling in strategic planning and making solid financial forecasts.

Identifying Student Potential: For Admissions Tests

Schools offering high-level programmes are generally selective and look for candidates with strong potential who can meet the rigorous demands of their courses and excel in their studies. To achieve this, they need effective assessment tools to select those with the necessary intellectual abilities. The GAAT is highly valued by these institutions, as the test’s high level of difficulty helps distinguish top candidates while ensuring objective selection.

How to Use GAAT for Admissions Tests

Unlike in recruitment or internal mobility, analysing GAAT reports individually can be challenging due to the large number of students taking the test. Therefore, the analysis is done on a large scale, based on the scores obtained. A candidate with low scores across several factors may struggle to keep up with the demands of a rigorous programme, indicating they may not yet be ready for the high academic expectations of a prestigious school. Hence, institutions tend to favour candidates with high, well-rounded scores across all four factors, as they are more likely to excel in a demanding programme. 

However, interpreting the results must also consider the specific needs of the programmes and how these skills translate into academic performance. For instance, courses in political science and journalism will prioritise "argument evaluation" and "critical analysis", whereas engineering or mathematics programmes will favour candidates with strong "deductive reasoning".

Therefore, if students applying to an engineering programme have less impressive results in areas less relevant to their studies, this should be considered in the selection process. Finally, since the skills measured by GAAT can be improved with practice, test results can also help identify areas for development and support students in strengthening their abilities.
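Programme-specific weighting of the four GAAT factors can be illustrated with a small sketch. The weights and candidate scores below are invented for illustration; they are not published GAAT norms:

```python
# Hypothetical sketch of programme-specific weighting of the four GAAT factors:
# the same candidate profile yields different composites depending on which
# skills a programme prioritises. All weights and scores are invented.

candidate = {
    "argument evaluation": 72,
    "critical analysis":   80,
    "deductive reasoning": 90,
    "inductive reasoning": 85,
}

weights = {
    "journalism":  {"argument evaluation": 0.35, "critical analysis": 0.35,
                    "deductive reasoning": 0.15, "inductive reasoning": 0.15},
    "engineering": {"argument evaluation": 0.15, "critical analysis": 0.20,
                    "deductive reasoning": 0.40, "inductive reasoning": 0.25},
}

def composite(scores: dict, w: dict) -> float:
    """Weighted composite of factor scores (weights sum to 1)."""
    return sum(scores[f] * w[f] for f in scores)

for programme, w in weights.items():
    print(f"{programme}: {composite(candidate, w):.1f}")
```

A candidate strong in deductive reasoning scores higher under the engineering weighting than under the journalism weighting, which is exactly the kind of programme-sensitive interpretation the text recommends.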

Helen Simard




  11. Principles of diagnostic reasoning

    Effective diagnostic reasoning often utilises both system 1 and system 2 thinking and requires a combination of experience and skills (pattern recognition, critical thinking, communication skills, evidence-based practice, teamwork and reflection) [8] .The reasoning process needs to be considered as comprising four discrete stages: information gathering, hypothesis generation, hypothesis ...

  12. Diagnostic reasoning in internal medicine: a practical reappraisal

    Models of diagnostic reasoning and hypothesis generation. The configuration in a doctor's mind of a particular script, whether it is correct or not, is essential for the hypothesis generation of disease (Fig. 1), which must subsequently be confirmed or ruled out.These early stages of the diagnostic process are crucial because an appropriate problem representation, to which a consistent and ...

  13. The Diagnostic Process

    This chapter provides an overview of diagnosis in health care, including the committee's conceptual model of the diagnostic process and a review of clinical reasoning. Diagnosis has important implications for patient care, research, and policy. Diagnosis has been described as both a process and a classification scheme, or a "pre-existing set of categories agreed upon by the medical ...

  14. Educational Strategies to Promote Clinical Diagnostic Reasoning

    Clinical teachers can use several strategies to promote the development of strong diagnostic reasoning skills. The recommendations that follow are drawn from research on how doctors reason. 1-4 ...

  15. Teaching Diagnostic Reasoning to Advanced Practice

    Many nurses transitioning to advanced practice roles struggle in gaining competence in diagnostic reasoning, a core skill requiring integration and application of complex patient data. Diagnostic error, a common cause of medical error, is often a result of faulty interpretation, synthesis, or judgment of available information. Nurse educators, confronted with decreased clinical site ...

  16. Teaching Clinical Reasoning and Critical Thinking

    To foster clinical reasoning and critical thinking skills, faculty must help learners develop analytic reasoning skills and habits of life-long self-directed learning. ... Constructing the map is a concrete strategy to explicitly teach diagnostic reasoning. In the context of time-pressured work rounds, one may focus on a portion of a map, such ...

  17. Diagnostic reasoning in cardiovascular medicine

    Clinical reasoning is an umbrella term that includes both diagnostic reasoning and management reasoning. Diagnostic reasoning is a classification task with various levels of specificity (for example, AMI versus STEMI versus STEMI with complete occlusion of the first obtuse marginal artery). Diagnostic reasoning has an objective endpoint ...

  18. 28.2: Developing Critical Thinking Skills

    Types of Thinking Used in Nursing. Nurses make decisions while providing patient care by using critical thinking and clinical reasoning. In nursing, critical thinking is a broad term that includes reasoning about clinical issues such as teamwork, collaboration, and streamlining workflow." On the other hand, clinical reasoning is defined as a complex cognitive process that uses formal and ...

  19. Full article: Clinical reasoning: What do nurses, physicians

    Physicians, nurses, and students of both professions covered in their definitions the themes of decision making, and diagnostic reasoning, (e.g., "It is the clinical decision making of the physician" [13, physician], "the aim of being able to formulate a nursing diagnosis" [27, nurse]). All groups covered various aspects of patient ...

  20. Critical thinking and diagnostic reasoning of the heart and

    Heart Diseases* / diagnosis. Humans. Syncope. Thinking. This is the second of two articles exploring assessment and clinical reasoning of conditions relating to the heart and cardiovascular system in the context of emergency care. In the last article, the structure and function of the heart was reviewed, and reference made to many of the ...

  21. Chapter 8: Diagnostic Reasoning and Decision Making

    Indeed, the ability to distinguish or discern a patient's underlying illness is critical to being an effective clinician as a hospital medicine provider. In many cases, hospitalized patients may be quite complicated with multiple competing possible reasons to explain their underlying signs or symptoms. ... Therefore, diagnostic reasoning and ...

  22. Evaluating and Enhancing Critical Thinking in the Workplace: GAAT

    GAAT: More Than Just a Logic Test, a Test of Critical Thinking. The GAAT critical reasoning test is designed to evaluate an individual's ability to think clearly, logically, and independently. It measures four specific skills: argument evaluation, critical analysis, deductive reasoning, and inductive reasoning.