The role of strategy and redundancy in diagnostic reasoning
| Content Provider | Semantic Scholar |
|---|---|
| Author | Bloch, Ralph F.; Hofer, Daniel; Feller, Sabine; Hodel, Maria |
| Copyright Year | 2003 |
| Abstract | Background: Diagnostic reasoning is a key competence of physicians. We explored the effects of knowledge, practice and additional clinical information on strategy, redundancy and accuracy of diagnosing a peripheral neurological defect in the hand based on sensory examination. Method: Using an interactive computer simulation that includes 21 unique cases with seven sensory loss patterns and either concordant, neutral or discordant textual information, 21 3rd-year medical students, 21 6th-year students and 21 senior neurology residents each examined 15 cases over the course of one session. An additional 23 psychology students examined 24 cases over two sessions, 12 cases per session. Subjects also took a seven-item MCQ exam of the seven classical patterns presented visually. Results: Knowledge of sensory patterns and diagnostic accuracy are highly correlated within groups (R2 = 0.64). The total amount of information gathered for incorrect diagnoses is no lower than that for correct diagnoses. Residents require significantly fewer tests than either psychology or 6th-year students, who in turn require fewer than the 3rd-year students (p < 0.001). The diagnostic accuracy of subjects is affected both by level of training (p < 0.001) and by concordance of clinical information (p < 0.001). For discordant cases, refutation testing occurs significantly in 6th-year students (p < 0.001) and residents (p < 0.01), but not in psychology or 3rd-year students. Conversely, there is a stable 55% excess of confirmatory testing, independent of training or concordance. Conclusions: Knowledge and practice are both important for diagnostic success. For complex diagnostic situations, reasoning components employing redundancy seem more essential than those using strategy. |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | https://pure.bond.edu.au/ws/portalfiles/portal/26285082/The_role_of_strategy_and_redundancy_in_diagnostic_reasoning.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |

BMC Medical Education 2003, 3:1. Received: 23 July 2002; Accepted: 24 January 2003; Published: 24 January 2003. This article is available from: http://www.biomedcentral.com/1472-6920/3/1. © 2003 Bloch et al; licensee BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.

Background

A major part of the undergraduate medical curriculum is dedicated to teaching the art and science of diagnosing illness and disease. Furthermore, when assessing the clinical competence of medical students, examiners must infer knowledge and reasoning skills from the behavior and responses of the candidates. It stands to reason, then, that medical teachers should possess a thorough understanding of diagnostic reasoning as a "basic science" of medical education. In reality, however, our comprehension of the diagnostic reasoning process is hazy at best.

The present study attempts to explore diagnostic reasoning by analyzing detailed recorded data-gathering behavior of experimental subjects with different levels of expertise in a computer simulation of patients with neurological lesions of the peripheral nervous supply to the hand.

Serious reasoning research started in psychology [1] during the 1950s. It took another 20 years for diagnostic reasoning to become an area of empirical research in medicine [2,3]. At a time when pragmatic medical educators believed in the existence of generic problem-solving skills, diagnostic reasoning research reestablished the primacy of content-specific knowledge [4]. Initially, research evolved along two intertwined threads, which alternately supported and confused each other: reasoning by (medical) experts and reasoning by computers. By now, these two fields of research have largely gone their separate ways.

Three factors (Table 1) have determined the type of experimental studies of diagnostic reasoning: firstly, the subjects studied; secondly, the clinical information provided to subjects, both by content and by method; and thirdly, the products of reasoning subjected to analysis.

Table 1: Factors in empirical research on diagnostic reasoning.

| Factor | Examples |
|---|---|
| Subjects studied | Experts [5], outstanding physicians [2], physicians-at-risk [6], "typical physicians" [7], learners [10] with differing levels of expertise |
| Clinical information provided | Sequential text fragments [8], x-rays [9,5], pictures [10] (e.g. dermatology [11], pathology), standardized patients [2], other simulations [12] |
| Products of reasoning analyzed | Diagnostic success [13], items collected [4,14], recall [15,16], reflection, "thinking aloud", introspection [5] |

This type of research is very labor intensive and, consequently, expensive. Thus it is difficult to collect sufficient data to reach adequate statistical power based on diagnostic success and process items alone. As a consequence, diagnostic reasoning research leans heavily on recall, introspection and reflection data [17]. It comes as no surprise, therefore, that the theories derived from this research tend towards models of semantic, analytical reasoning [18,19]. The literature is replete with a panoply of cognitive structures [20] – mainly semantic in nature – that are supposed to underlie diagnostic reasoning. The situation may be obscured further by the effect of social desirability bias, which may restrain experimental subjects from admitting to employing less than superlative reasoning strategies.

There is ample evidence [21,22] that analytical, semantic models alone do not fully explain diagnostic reasoning. Research based primarily on semantic recall, introspection and reflection contains blind spots when it comes to unconscious and implicit reasoning processes that are not based on semantic information. Methods focusing on such processes are thus required to look beyond semantic networks.

For further discussion, we define inference or inferential reasoning as: logical, algorithmic, mainly semantic, sequential, propositional, forward- and/or backward-directed, purposeful, open to reflection and introspection. In contrast, pattern recognition is: holographic, heuristic, mainly perceptual, parallel, redundant, unconscious, probabilistic and intuitive. Inferential reasoning is characterized by strategy, pattern recognition by redundancy. By "strategy" we mean a purposeful sequence of tests, where the specifics of the next test are selected on the basis of previous tests so as to return maximum new information. "Redundancy", on the other hand, expresses the number of tests that fail to provide any new information for inference.

A suitable experimental model should, therefore, involve a sufficient number of perceptual cues to allow for good statistical power. One such candidate is eye-movement scanning in the interpretation of histological slides or x-rays. Unfortunately, the fact that the ocular axis is directed at a certain location on the image does not indicate what is actually seen by the central visual field, or that visual information is indeed being recorded and processed. We have selected a simple deterministic computer simulation involving the (sensory) neurological examination of the peripheral nervous system in the hand. The collected sequence of responses and coordinates of each sensory stimulus allow statistical inference on the reasoning strategies, be they inferential or based on pattern recognition.

For this experiment we asked ourselves five questions:

1. How do subjects pick the specific locations on the hand to be tested (strategy)?
2. How many additional points, in excess of what is required for strict inference, do they test before reaching a diagnosis (redundancy)?
3. How often is the selected diagnosis correct (accuracy)?
4. How are strategy, redundancy and accuracy related to knowledge and practice?
5. How are strategy, redundancy and accuracy affected if subjects receive additional clinical information (symptoms and history) that is concordant, neutral or discordant with respect to the sensory pattern?

The key to answering these questions is the ability to quantify the information revealed by each successive sensory test. The accepted measure of information content is entropy, as introduced by Claude Shannon [23] in 1948 (Appendix A, see Additional file 1). Specifically, it indicates the potentially available information not yet revealed by the test sequence. An entropy value of 1.0 indicates that none of the available diagnostic information has yet been revealed and that all diagnostic possibilities are still equally likely. Conversely, an entropy value of 0.0 indicates that all relevant diagnostic information has been revealed and that only one diagnosis remains possible.

Figure 1: MCQ score and diagnostic accuracy for the experimental groups. The values for psychology students in their initial session and in the second session, one week later, are plotted separately. y = 0.4386x + 0.4115, R = 0.6368.
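The normalized-entropy measure and the "strategy" notion described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' simulator: the four sensory-loss patterns, the site names, and the greedy site-selection rule are all hypothetical assumptions made for the sketch; only the 1.0-to-0.0 entropy convention follows the text.

```python
import math

# Hypothetical, simplified sensory-loss patterns (NOT the study's 21 cases).
# Each pattern maps a named test site on the hand to the expected finding:
# 1 = sensation intact, 0 = sensation lost.
PATTERNS = {
    "median": {"index": 0, "little": 1, "dorsum": 1, "palm": 0},
    "ulnar":  {"index": 1, "little": 0, "dorsum": 1, "palm": 0},
    "radial": {"index": 1, "little": 1, "dorsum": 0, "palm": 1},
    "intact": {"index": 1, "little": 1, "dorsum": 1, "palm": 1},
}

def entropy(candidates):
    """Normalized Shannon entropy of equally likely remaining diagnoses:
    1.0 = nothing revealed yet, 0.0 = a single diagnosis remains."""
    if len(candidates) <= 1:
        return 0.0
    return math.log2(len(candidates)) / math.log2(len(PATTERNS))

def consistent(candidates, site, finding):
    """Candidates whose predicted finding at `site` matches the result."""
    return [d for d in candidates if PATTERNS[d][site] == finding]

def best_next_site(candidates, untested):
    """'Strategy': greedily pick the untested site with the lowest
    expected post-test entropy, i.e. maximum expected new information."""
    def expected_entropy(site):
        total = 0.0
        for finding in (0, 1):
            split = consistent(candidates, site, finding)
            if split:
                total += len(split) / len(candidates) * entropy(split)
        return total
    return min(untested, key=expected_entropy)
```

Under these toy patterns, a purely strategic examiner tests "palm" first (the only site that halves the candidate set), then one discriminating finger site, reaching entropy 0.0 in two tests; any tests performed after that point are what the paper counts as redundancy.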