# INNOVATION ABSTRACTS

NATIONAL INSTITUTE FOR STAFF AND ORGANIZATIONAL DEVELOPMENT (NISOD), COLLEGE OF EDUCATION, THE UNIVERSITY OF TEXAS AT AUSTIN • WITH SUPPORT FROM THE W. K. KELLOGG FOUNDATION AND THE SID W. RICHARDSON FOUNDATION

VOLUME XVI, NUMBER 18

## Does All This Technology Make a Difference?

Computer use in the “real world” has grown at a dizzying pace. We encounter computers and computer technology everywhere—at the check-out stand in point-of-sale terminals, in our cars, in our televisions, even in our toasters and coffeepots. Few jobs have not been affected by advances in technology.

Community college students expect to see technology put to extensive use in their colleges as well; by and large, colleges have accommodated them. While not every college can boast a computer on every faculty desk, or a campus-wide network or information system, virtually every community college has dozens, if not hundreds, of computers. Computer labs and classrooms are found on almost every campus, and they are increasingly being devoted to teaching subject matter having little to do with programming or computer literacy. A staggering number of educational software titles are available, and hundreds of faculty hours have been devoted to searching through these titles to find the right “fit” for their curricula.

As the technology has advanced, more faculty have been excited by the possibilities. Words and phrases like “interactivity,” “multimedia,” the “virtual classroom,” and “electronic learning communities” have entered the teaching vocabulary. Nationally, evidence suggests that the application of technology to instruction in community colleges is growing rapidly. Faculty development centers, training programs, and instructional computing labs are proliferating.
Unfortunately, in an era of flat or declining overall resources, technology spending comes at the expense of other possible initiatives, and community colleges must ask, “Are we doing the right thing here? Are the dollars we are putting into computers and software making a difference where it counts—with students?” Teachers and program administrators are trying to provide answers, but the questions are not easy ones. Computer-aided instruction (CAI) is complex; many factors impact the learning process and can affect its outcomes.

## The Traditional Experimental Model

Despite these complexities, the predominant approach in most studies of CAI is the classic experimental design, which compares a treatment group and a control group on gain scores or pre/post measures of learning or achievement. Even when sophisticated statistical techniques are used, the results are often inconclusive, hard to interpret, and of little value to decision makers.

The problem is that the effect of CAI (or any teaching strategy) is difficult to isolate—and isolating the variable of interest is integral to using an experimental design. Such isolation is difficult because other variables, which exist in any learning situation, interact with and confound the effect of the teaching strategy. These variables are difficult to control across groups, especially groups large enough to ensure sufficient statistical power. They include, but are not limited to, such diverse factors as lab aesthetics and environment, the appropriateness of the hardware, the training of the teacher and staff, the involvement of the teacher, the quality and content of the orientations, student attendance, the fit between the computer activities and the learning objectives, and time on task—the list goes on and on.
Reasonable questions a reader might ask of a study concluding that a CAI approach was not significantly different from traditional teaching approaches include: “Is the reason for these findings that the software is not terribly useful or effective? Or is it that student keyboarding skills or insufficient lab time inhibited the class from making use of the full power of the software?” The reality is that, despite the proliferation of computers and computer technology in the worlds of work and commerce, educators are still learning how to apply computer technology effectively to learning. Because the use of technology is still in a formative stage, evaluations of CAI need to address process-oriented, formative concerns—and the traditional experimental pre/post design is not well-suited to that task. The question evaluators should ask is not “Does CAI work?” but “How does CAI work?” And that kind of question requires a different approach.

## A Different View

Community colleges need evaluation models that will help them understand how to use CAI most effectively by asking the kinds of questions that will illuminate how its use may constrain or augment the myriad factors affecting the process of learning. Evaluation of CAI should involve all of the many stakeholders in its use—faculty, students, and lab staff—in a way that will provide formative insight into how all aspects of technology use can be improved.

The key difference between a broad-scope evaluation and the traditional research model is in the number of questions asked. CAI evaluation should pose many