The use of computers in education is much more a series of failures than of success stories. I agree with Erik Duval that, in general and on a large scale, the impact of technology on the way people learn has been minimal. There are examples of success in open distance learning and in military training (simulations), but these models do not fit very well into the school and university context. So, I wouldn’t call them “good examples”.
It can be claimed that, from the learning perspective, the only proof-of-concept cases of using computers for learning in school and university environments are the small-scale experiments with CSCL (Computer-Supported Collaborative Learning) tools, such as the classic CSILE (and Knowledge Forum) and Belvedere, and later the experiments made with web-based social software tools, such as Fle3 and blogs.
Why is the impact of technology on the way we learn so marginal, even though millions of dollars and euros have been spent on developing educational computer technology? Could it be that there has been some fundamental conceptual bias, so that all the minor changes made to it do not help much because the underlying principle is wrong?
An analogy: if you are sailing somewhere near the equator and by mistake set a course to the south, even though you should be going north, it does not help much to correct your course by 5 degrees every year. You will still end up in Antarctica.
Let’s try to make a critical analysis of the history of ICT in learning. What will the history look like if we try to pull apart the mental models and educational thinking behind the promises of different times?
I see four major phases in the history of using computers in education. The fifth, the era of social software and free and open content, is still to come, I hope. The phases are:
Late 1970s to early 1980s: programming, drill and practice;
Late 1980s to early 1990s: computer-based training (CBT) with multimedia;
Early