Demonstrating the value of ICTs in teaching and learning, and in particular a causative link between ICT use and outcomes for students, has always been problematic because of the disruptive nature of the technology itself, and the likelihood that traditional measures of success and achievement are no longer valid.
There’s been so much discussion in recent times about the benefits of all students having their own computer, or 1-1 initiatives as they’ve become known. My personal view on this has always been mixed.
On the one hand I can see significant benefits in each student having their own computer – although many of the initial arguments for this are up for scrutiny in an age of cloud computing and where a range of devices can be used to connect to the cloud, not just a laptop.
On the other hand, my observation of what happens in many 1-1 computing environments is that the full benefit of the computer in the context of classroom learning is seldom realised, because there is little or no change in the basic pedagogy; the computer simply ends up substituting for the exercise book or reference book in a traditional pedagogical approach. Further, the focus on individual use of the laptop may limit the opportunities for collaborative activity and team-work.
After reading TechLearning’s latest publication But Does it Work? – Evaluating Our Nation’s One-to-One Initiatives, I’m still of two minds.
This eBook explores what we are learning from local, state-level and national research into the impact of one-to-one computing on students, teachers and schools. To what degree are these ambitious programs living up to their promise? How can they be improved? And how do we build evaluation into all our endeavors going forward so that we maximize results and improve understanding about what works and what doesn’t when it comes to education and technology?
The report synthesizes several evaluative studies, at national, state and district level, each of which paints a reasonably rosy picture – until you read a little more detail to see just what evaluation criteria are being applied. This is where the problem occurs: not with the 1-1 programmes themselves, but with the way(s) in which they are evaluated.
The America’s Digital Schools report, for instance, claims that 1-1 implementations have been a roaring success, with 78 percent of the 1:1 districts in the 2008 report saying that they had seen “moderate to significant improvement” in student achievement as a result of their program. This all sounds great until you read further and discover that the specific measurement tools used to arrive at this figure were primarily student, teacher and parent feedback, plus improved attendance statistics. Other measures such as high-stakes test scores, district benchmark exams and declines in drop-out rates were used less frequently. Granted, stakeholder satisfaction is an important indicator of success – but there need to be more objective measures as well; as the report itself points out, many districts shy away from high-stakes testing and benchmarks “because of the risk of failure.” This is not to say that high-stakes testing and benchmark exams are the only way to achieve a more objective measure.
One of the documents linked to in the report is the Evaluation of the Texas Technology Immersion Pilot (published January 2008 – available as a PDF download). I found this a particularly interesting read because of how they define what they call the theory of technology immersion, and the methodology they used to evaluate the effectiveness of their approach. The report outlines the observed and measured effects on teachers and teaching, learners and learning and on student achievement. There’s too much for me to do justice to the report in a few sentences here – but two things did stand out for me.
The first was a repeated observation that improvement occurred over time (in the case of this study, three years) and that in many cases the improvements didn’t begin to show until the end of the second year. This should be a caution to those looking for ‘quick-fix’ measures of improvement over shorter time-frames. Debbie Rice, director of technology at Auburn City Schools in Alabama (one of the districts referred to), concludes:
“Overwhelmingly, parents and other stakeholder groups are expressing a very strong desire to continue the initiative. People who were previously lackadaisical are becoming vocal that having individual laptops for students and staff is an absolute necessity, not a luxury.”
In the Auburn district the indicators of success that are emerging include:
- Engaged students;
- Energized teachers;
- Excited parents;
- Increased enrollment;
- Decreased discipline problems;
- An increase in inquiry-based, exploratory learning and other pedagogical best practices.
(access a three-part White Paper and other information about Auburn’s 1:1 program at the K-12 Computing Blueprint site)
The second thing in this report that struck me was the finding that students who had greater access to laptops, and who used them for learning to a greater extent, especially outside of school, had significantly higher [standardised] reading and mathematics scores. This highlights for me where I believe the real value of a 1-1 programme lies: not in simply providing a laptop/computer for each child within the classroom learning environment, but in the empowerment that comes from being able to work with and through ICTs in all learning contexts – to have a genuinely personal learning environment (PLE) that is customised and populated to enable learning at any time, in any place and at any pace that suits the learner. The classroom then becomes just one of those learning environments, one where at least some of the time may be better spent on more social, collaborative and participatory experiences.
Whatever your thoughts or experiences of 1-1 computer programmes, one thing is clear from reading this report and the studies it refers to: when implementing such programmes, we need to be clear about exactly what our objectives are, and then ensure that the measures of success we apply are appropriate and take into account the disruptive nature of the technological intervention.