Learning from Transformational and Derivational Worked-out Examples
| Content Provider | Semantic Scholar |
|---|---|
| Author | Gerjets, Peter; Scheiter, Katharina; Kleinbeck, Stefan; Schmid, Ute |
| Copyright Year | 2002 |
| Abstract | Learning from Transformational and Derivational Worked-out Examples

Peter Gerjets (p.gerjets@iwm-kmrc.de) & Katharina Scheiter (k.scheiter@iwm-kmrc.de), Knowledge Media Research Center & Department of Psychology, University of Tuebingen, Konrad-Adenauer-Strasse 40, 72072 Tuebingen, Germany.
Stefan Kleinbeck (Stefan.Kleinbeck@Psychologie.Uni-Freiburg.de), Department of Psychology, University of Freiburg, Niemensstrasse 10, 79085 Freiburg im Breisgau, Germany.
Ute Schmid (schmid@informatik.uni-osnabrueck.de), Department of Computer Science, University of Osnabrueck, Albrechtstrasse 28, 49069 Osnabrueck, Germany.

In this research, two different solution formats for instructional worked-out examples were compared experimentally with regard to several measures of learning outcomes. A typical example problem from the domain of probability theory used for experimentation is the following:

Problem statement: At the Olympics, 7 sprinters participate in the 100m-sprint. What is the probability of correctly guessing the winners of the gold, silver, and bronze medals?

The worked-out solution designed for this type of problem according to the transformational example format was inspired by a structure-mapping view of analogical transfer (Gentner, 1983). According to this view, transformations of complex problem representations into one another (in terms of structure mapping / schema induction) are pivotal processes for learning and problem solving. Instructional worked-out examples designed from this perspective may aim at conveying the structural problem features necessary to recognize problem categories. Problem categories comprise classes of isomorphic problems that can be transformed into one another (using analogy) and that can be represented in a more abstract way (using problem-type schemas). A problem-type schema consists of information about the defining structural features and the appropriate solution procedures for a class of problems. In our experiments the transformational example solutions had the following structure (a code sketch of this calculation follows the metadata table below):

Problem features: Selection of 3 sprinters out of 7 sprinters; order of selection is important; each sprinter can only be selected once (without replacement)
Formula: A = n! / (n - k)!
Inserting: n = 7, k = 3 ⇒ A = 7! / (7 - 3)! = 210
Result: 1/210 ≈ 0.48%

Learners are known to have difficulties in acquiring structural problem features and problem categories, as well as in adapting solution procedures to novel problems that differ from the problem categories conveyed. Thus, we compared the transformational approach to a different instructional approach that was inspired by AI models of derivational analogy (Carbonell, 1984). The main idea of the derivational approach is to convey knowledge on how to derive solutions for problems regardless of their problem category, and to abandon the mapping/categorization of problems as well as the application of category-specific solution procedures. In our experiments the basic rationale for the derivational example format is to decompose a complex event into a sequence of individual events. The overall probability is calculated by multiplying the individual-event probabilities. Contrary to the transformational approach, the solution strategy conveyed in the derivational approach is not seen as a solution schema that is applied to the problem as a whole. Instead, it is presented as a sequence of solution steps, where each step can be justified (and modified) by concrete features of the problem at hand. Therefore, the derivational example format is characterized by high modularity. The derivational example format had the following structure (a second sketch after the table below reproduces this step-by-step calculation):

Rationale: Calculate the probability of correctly guessing the winner of each medal; each medal can be taken into account separately.
Step 1: 7 possible choices (sprinters), 1 acceptable choice (winner) ⇒ 1/7
Step 2: 6 possible choices (the winner can't win two medals), 1 acceptable choice ⇒ 1/6
Additional steps: Analogous procedure
Result: Overall probability: 1/7 * 1/6 * 1/5 = 1/210 ≈ 0.48%

In order to compare both example formats, two hypertext-based experiments using different dependent measures (e.g., learning time, example processing strategies, problem-solving performance, problem classification, problem comparison) were conducted. The results show a clear superiority of the derivational example format with regard to learning time and problem-solving performance.

Acknowledgements: This work was supported by the Deutsche Forschungsgemeinschaft. We thank R. Catrambone for helpful comments, D. Bruns, C. Kramer, and V. Lange for conducting the experiments, and S. Albers for programming.

References:
Carbonell, J. G. (1984). Learning by analogy: Formulating and generalizing plans from past experience. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (pp. 137-161). Berlin: Springer.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, 155-170. |
| File Format | PDF; HTM / HTML |
| DOI | 10.4324/9781315782379-220 |
| Volume Number | 24 |
| Alternate Webpage(s) | https://cloudfront.escholarship.org/dist/prd/content/qt65b1f29c/qt65b1f29c.pdf?t=op34ny |
| Alternate Webpage(s) | https://cogsys.uni-bamberg.de/publications/2002_CogSci_Member_Abstract.pdf |
| Alternate Webpage(s) | https://doi.org/10.4324/9781315782379-220 |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
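
The abstract itself contains no code; the following Python sketch merely reproduces the arithmetic of the transformational example format, computing the number of ordered selections A = n! / (n - k)! and the resulting guessing probability for the sprinter problem. The function and variable names are illustrative, not taken from the paper.

```python
from math import factorial

def ordered_selections(n: int, k: int) -> int:
    """Number of ways to pick k items from n in order, without replacement: n! / (n - k)!."""
    return factorial(n) // factorial(n - k)

# Sprinter problem from the abstract: guess gold, silver, and bronze among 7 runners.
arrangements = ordered_selections(7, 3)   # 210 possible medal assignments
probability = 1 / arrangements            # exactly one assignment matches the guess
print(arrangements)                       # 210
print(f"{probability:.2%}")               # 0.48%
```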
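By contrast, here is a minimal sketch of the derivational format's rationale for the same problem, assuming the decomposition described in the abstract: each medal is treated as an individual event and the single-event probabilities are multiplied. Again, the identifiers are illustrative only.

```python
from fractions import Fraction

# Derivational rationale: decompose the complex event into one event per medal
# and multiply the individual-event probabilities.
remaining_choices = [7, 6, 5]                      # sprinters still available at each step
step_probabilities = [Fraction(1, n) for n in remaining_choices]

overall = Fraction(1, 1)
for p in step_probabilities:
    overall *= p                                   # 1/7 * 1/6 * 1/5

print(overall)                                     # 1/210
print(f"{float(overall):.2%}")                     # 0.48%
```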