INNOVATION ABSTRACTS
Published by the National Institute for Staff and Organizational Development
With support from the W. K. Kellogg Foundation and the Sid W. Richardson Foundation

COST/BENEFIT TESTING

The Problem

The testing of mathematical concepts is a difficult task. Are multiple-choice responses reflective of a student's knowledge? Perhaps he or she knew the entire solution except for a mere starting formula or a partial hint toward a word-problem setup. Are time-pressure exams reflective of the "non-academic" world, or are accuracy, persistence, and the skill of buying information more valid indicators? Perhaps fewer, more extensive problems are the way to go. But answering two problems wrong on a five-problem exam would fail the typical student, given the norm in grading policies. What is the answer to this dilemma? Is there another, more effective way to test students' content and process knowledge?

Cost/Benefit Testing

A possible solution lies in the use of "Cost/Benefit Testing" (CB-Testing). Employing this technique, an instructor selects a few extensive problems that reflect the theory being evaluated. The number of problems chosen should be no more than can be completed by 90% of the class in the time allotted, eliminating the artificial and somewhat unfair time constraint that exams usually pose.

Next, a scheme is developed whereby students can "buy information" from the teacher through the use of "penalty points" (pp's). For example, a right-or-wrong gesture from the instructor may cost 1 pp on a 10-point problem. A forgotten formula may cost 2 pp's; a diagram setup, 4 pp's; a word-problem setup with all equations unsolved, 5 pp's; and so on. Students may buy information only during the middle third of the exam time. Thus, during an hour-and-a-half exam, the instructor would allow pp purchases from the 30th through the 60th minute. This policy prevents last-minute rushes and requires the student to make his or her own cost/benefit decision at the right point in the interactive exam session. (A brief sketch of this bookkeeping appears at the end of this abstract.)

Surprisingly, 50% of a typical class takes advantage of this approach. Students enjoy it as a way to unfreeze on what may be a difficult problem. They begin to rely on their own thinking abilities in order to understand how to deal with risk and the cost associated with it. They feel that the exam more accurately reflects their knowledge and abilities. And the instructor takes pride in seeing a "slow" student solve at least half of a difficult problem.

All in all, it is a win/win situation, except for one minor difficulty. If instructors help students on the exam, does this not skew the distribution unfairly toward higher grades? This would definitely be the case if we constrained ourselves to the conventional grading policy of 90+ = A, 80-90 = B, etc. Even a "normal distribution curve-fitting policy" may not deliver a fair and motivational distribution of grades. So a question remains as to what method of grading would be as equitable and motivational as the above testing technique seems to be.

Cluster Grading

The ideal grading technique that is fair and motivational, and that fits perfectly in conjunction with CB-Testing, is "Cluster Grading." If the exam is difficult enough despite the CB technique, then the distribution of grades should be scattered throughout the 0-100 spectrum in a manner that reveals "point gaps" between groups.
One such actual distribution is as follows:

94 94 92 *** 89 89 88 87 86 *** 84 *** 82 81 80 80 79 *** 77 77 *** 75 74 73 73 73 72 71 70 70 69 *** 66 65 *** 63 62 *** 60 59 59 *** 54 54 *** 52 51 *** 47 46
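The mechanics of cluster grading lend themselves to a short sketch. The Python fragment below groups the scores above into clusters wherever two consecutive sorted scores differ by at least `gap` points. This is an illustration only: the article marks its breaks (the *** separators) by inspection and states no numeric rule, so the `gap` threshold and the function name are assumptions. With gap=3, the distribution above splits into five clusters.

    def cluster_grades(scores, gap=3):
        """Split exam scores into clusters wherever consecutive sorted
        scores differ by at least `gap` points (a "point gap").
        The threshold is an assumed stand-in for the instructor's judgment."""
        ordered = sorted(scores, reverse=True)
        clusters = [[ordered[0]]]
        for prev, score in zip(ordered, ordered[1:]):
            if prev - score >= gap:
                clusters.append([score])    # gap found: start a new cluster
            else:
                clusters[-1].append(score)  # same cluster as the score above
        return clusters

    # The distribution reported above.
    scores = [94, 94, 92, 89, 89, 88, 87, 86, 84, 82, 81, 80, 80, 79,
              77, 77, 75, 74, 73, 73, 73, 72, 71, 70, 70, 69, 66, 65,
              63, 62, 60, 59, 59, 54, 54, 52, 51, 47, 46]

    for cluster in cluster_grades(scores):
        print(cluster)

A smaller `gap` yields finer groupings closer to the twelve breaks marked above; a larger one yields fewer, broader clusters. Either way, the grade boundaries fall in the gaps rather than at fixed cutoffs such as 90 or 80.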
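Finally, the CB-Testing bookkeeping promised earlier. This Python sketch records the article's example price list and the middle-third purchase window. The function names, and the rule that penalty points are simply subtracted from the points earned on a problem, are assumptions made for illustration; the article itself gives only the prices and the window.

    # Price list for "buying information" on a 10-point problem,
    # taken from the article's examples.
    PP_COST = {
        "right_or_wrong": 1,      # a right-or-wrong gesture from the instructor
        "formula": 2,             # a forgotten formula
        "diagram_setup": 4,       # a diagram setup
        "word_problem_setup": 5,  # full word-problem setup, equations unsolved
    }

    def purchase_window(exam_minutes):
        """Purchases are allowed only during the middle third of the exam,
        e.g. minutes 30-60 of a 90-minute exam."""
        return exam_minutes // 3, 2 * exam_minutes // 3

    def problem_score(points_earned, purchases, max_points=10):
        """Net score on one problem: points earned minus penalty points
        for information bought, floored at zero.  (The subtraction rule
        is an assumption; the article does not spell out the arithmetic.)"""
        penalty = sum(PP_COST[item] for item in purchases)
        return max(0, min(points_earned, max_points) - penalty)

    # A student who fully solves a 10-point problem after buying the
    # forgotten formula and one right-or-wrong check nets 10 - (2 + 1) = 7.
    print(purchase_window(90))                               # (30, 60)
    print(problem_score(10, ["formula", "right_or_wrong"]))  # 7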