The Software Report: Digging Deeper – by Robert Kadel
International Society for Technology in Education: Learning & Leading with Technology, September/October 2007, pages 38-39.
This article examines the report recently delivered to Congress by the U.S. Department of Education, "Effectiveness of Reading and Mathematics Software Products: Findings from the First Student Cohort." The article is interesting in that, on the surface, the report seems to say that technology has little effect in the classroom, but as the reader moves through the article, the indications are that this is not the case. As the title of the article suggests, there is a need to 'dig deeper' into the actual guts of the report. The author points out many good issues to examine. Two questions in particular occurred to me as I was reading.
1. Were the control groups actually control groups? When this study was undertaken, it appears that little or no effort was made to create a 'clean' control group. I think this is perfectly reasonable, in that no two classrooms are going to be identical. Classrooms are live environments, and as such they are very difficult to control. Anytime you involve people in a study, you have to expect skews based on personal details. As the author states, "To be a 'true' control group, the study authors would have had to force all control classrooms not to use any technology products at all." Since this would defeat the purpose of introducing technology into classroom environments and, as stated, would be generally impossible to enforce, I cannot see a truly 'clean' control group ever being a possibility. I do think the report should have stated, as the author of the article does, "…this isn't so much a comparison of ed tech versus no ed tech. It's a comparison of the use of certain software programs in classrooms to classrooms where other software may or may not have been used."
2. How was usage gauged during the test? The grade levels were clearly defined in the study, but how was usage monitored? Two things the author stated were very telling. The first was that the students used the software less than was intended, which indicates that some form of monitoring occurred during the study. But not knowing the instructions given by the developers is a problem, because we don't know whether the recommended amount of time was excessive or reasonable. If one assumes it was reasonable, it is disturbing that the software was not utilized as recommended. Was the shortfall proportionally the same across the grades studied? The other point about usage was the decline in use after the teachers' initial training. I find this telling as well. I have seen numerous occasions where we train individuals on software, but without follow-up training or Q&A sessions, the software is relegated to the closet. The author had concerns about the same issues. Training needs to be available to educators at the time the product is actually being used. If they don't understand the software, it follows that they will not use it correctly in the classroom.