Canadian Journal of Learning and Technology

Volume 32(3) Fall / automne 2006

Commentary on e-Learning Review

Margaret Haughey


Margaret Haughey is Vice President, Academic at Athabasca University. Correspondence regarding this commentary can be e-mailed to:

Given the extensive adoption of research orientations that employ qualitative data collection techniques in education, I found it exciting to read about a meta-analytical method that combined findings from studies within these research orientations with those from the well-established traditional forms that most often employed quantitative data strategies. Traditional research methods are not exclusively quantitative but usually transform qualitative data obtained through methods such as open question survey responses into a quantitative form. Since the development of alternative research paradigms and data analysis strategies for qualitative data, there has been extensive writing, much heat and little light, on the vexing question of how to acknowledge the differing research orientations—and by that I mean chiefly constructivist/interpretivist, critical science, and post-structuralist, although there is a longer list in the Denzin and Lincoln Handbook of Qualitative Research (2000)—while coming to some general conclusion that could be useful to policy makers. Abrami, Bernard and their colleagues suggest they have a format, and I congratulate them for attempting to bridge the chasm.

The most obvious next question for me was: Does this research categorization scheme work? Before I comment, I would like to go back to a more fundamental question. This is a review of e-learning in Canada, but what do we mean by e-learning? The authors begin with a definition:

E-learning has become the general term encompassing the application of computer technologies to education, whether it occurs in face-to-face classrooms, in blended and hybrid courses, in mediated distance education contexts or in online learning environments. (p. 9)

They go on to quote the Canadian Council on Learning’s definition as “the development of knowledge and skills through the use of information and communication technologies (ICTs) particularly to support interactions for learning—interactions with content, with learning activities and tools, and with other people.” They conclude with the note that e-learning is “not merely content-related” (p. 9) or technology specific, and can include hybrid or blended learning. Embedded in the definition is the use of ICTs. ICT is wonderful shorthand but it defies easy definition and covers a “long list of goods and services, including older technologies. . . such as the telephone and television, and newer technologies whose functionalities are increasingly overlapping through the process of convergence” (Sciadas, 2006, p. 5).

The definition of e-learning, then, is very expansive, and while it may be useful to be broad when one is trying to identify papers for possible inclusion in the database, it raises questions about the variety of possibilities it entails. Many would contend that all learning is mediated, whether through words, spoken or written, or images constructed in two or three dimensions, so that the specification of ICTs adds little to the complexity of the general definition of mediated learning. Are studies on the teaching of computer skills or software programs included in this definition? What about studies on the use of PDFs (portable document format files) in science fieldwork? It must include video-based multimedia developed in language and literature courses as well as learning objects used in advanced calculus courses, but does it include students using PowerPoint or keyboarding their work into a computer? All of these examples employ ICTs in pursuit of enhanced learning, but in the end we are left wondering whether the granularity is so large that the findings might be meaningless in providing any specific direction. Broaden the context to include health, early childhood and lifelong learning and the difficulties become even more obvious. How can we be confident about the utility, in a specific context, of any findings that arise?

The second part of the title references evidence, gaps and promising directions. Evidence refers to evidence of the impact of e-learning or “what predictor variables explain technology impacts?” (p. 17). Federal and provincial governments, jurisdictions and institutions have put considerable funds towards the acquisition of ICTs and the development of a comprehensive technological infrastructure in the belief that ICTs would bring efficiencies to schools and enhance learning outcomes for students. The benefits gained through computerization by business and industry include greater efficiencies through automation, greater effectiveness and more autonomy as decisions are distributed throughout the organization, better knowledge management as expertise is more widely recognized, and less hierarchical organizations with more emphasis on learning as an aspect of all members’ working lives. Yet, while there is extensive research that shows gains through the use of specific ICTs in specific situations, research into educational outcomes resulting from use of ICTs has been much more elusive and inconclusive (McFarlane, 2005).

Levin (2005), commenting on trends in Canadian education and the “supposedly life-altering impact of information technology in schools” (p. 14), reflects frustration with the two poles of ICT belief: (i) that the mere presence of ICTs in schools will be transforming, or alternatively, (ii) they are a waste of time and money. This dichotomy of views was very common at the beginning of this decade and was one Ungerleider and Burns (2002) sought to resolve by suggesting that meta-analyses of the few “credible” studies available showed no significant gains. Unfortunately, the few credible studies completed prior to 2002 tended to examine computer-assisted instruction, which can of course be included in a broad definition of e-learning. Large-scale studies using experimental or quasi-experimental conditions can be valuable in identifying differences, but they have to be representative of the larger educational landscape if they are to be useful. And that has been, and continues to be, the issue with e-learning studies.

As Breuleux (2001) has pointed out, we tend to talk about the impact of e-learning or ICTs as if we were speaking of one entity, education, hit by another entity, technology, like a meteor coming to earth. For Breuleux, technology is neither a “tool” like a hammer nor an “ingredient” to be added to education; it is an “enabler,” and part of a larger process that is transforming education. That larger context concerns the ways ICTs are changing how people live, work and communicate. This larger societal context is often not considered when doing studies of in-school use. Not only is there extensive use of ICTs in workplaces and society in general, but many students have grown up in a world where there has always been computerization. Students are using a variety of digital technologies outside schools, from cell phones to iPods, from portable digital games to DVDs, and from CD-ROMs to the Internet, which they see as familiar tools for communication, discovery and understanding, all aspects of learning. Either in-school learning provides opportunities for students to obtain the skills necessary to work in a technology-infused workplace, and incorporates the range of students’ out-of-school uses of digital technologies, or school becomes an increasingly less relevant place of learning for students. I think it is for this reason, rather than that we have resolved the issue of the impact of technology, that the question of the influences of technology on learning has shifted focus.

We often resort to Rogers’ (1995) theory of the diffusion of innovations in describing the introduction of ICTs or e-learning into education. However, while that might work if we are talking about the addition of something into an otherwise relatively unchanging system (and some would contend that the education system is very resistant to change), I find the analogy does not work well in this case. The situation is much more complex. First, this innovation requires a considerable infrastructure (and extensive political support); second, it takes a comparatively long time for innovations that reflect societal changes to become embedded in schools; and third, our understanding of learning and of what we value as educational outputs are also changing.

The advent of digital technologies has been compared to the advent of television, the telephone or the car. In the case of television, this broadcast medium required an infrastructure of television stations, transmitters and satellite dishes, actors, plays, advertising, news programs, series made for television, the infrastructure of lights, cameras, announcers and commentators associated with sports events, the laws regarding rights to tape and broadcast world events, and much more. Television not only brings the world to the home but it also creates television worlds and products that sustain it. We are only beginning to recognize how something similar is occurring through the integration of ICTs into our lives.

Internet use in Canada continues to rise. In 1994, 18% of Canadians were using the Internet. By 1999, the figure had risen to 42% (the in-school figure was 14.9%) (Dickinson & Ellison, 2000), and more recent figures suggest that 68% are regular users. In homes with school-aged children, 81% had access to the Internet (Statistics Canada, 2006). Despite the publicity that every willing school and library in Canada was connected to the Internet by 1999, the in-school use figure has always been substantially below out-of-school use, basically because of a lack of infrastructure and low-scale integration into teaching approaches (Statistics Canada, 2004). Aboriginal communities have generally been ignored in these calculations.

The first wave of adoption focused on hardware, on wiring and on the acquisition of software. Even in schools where there is a local area network and Internet connections, the major emphasis has been on hardwiring classrooms rather than on wireless use. The development of the software consumer industry, through the constant upgrading of equipment and software to develop and then meet the needs of a fast-growing market, was particularly difficult for cash-poor school jurisdictions. High schools, for example, still tend to have their computers housed in labs and the library. Any teacher who wants to integrate ICTs into instruction has to book lab time and cope with the difficulties of scheduling in order to use computers for perhaps only a small portion of the class. If most teachers want to use ICTs, the system cannot accommodate them; hence teachers resort to PowerPoint and interactive whiteboard presentations where they can be in more control of their time and resources. Behind this planning, the teacher is very aware of the pressures of time and the problems technical difficulties can create. Researchers doing studies of ICT or e-learning use, therefore, have to take all these realities into account.
McFarlane and her colleagues (2005) give some excellent descriptions of the difficulties of undertaking well-funded studies across schools and jurisdictions.

While there remains a concern with the impact of ICTs on education in the popular press, many in education are recognizing that there is a much greater need for global understanding, multicultural awareness and civic engagement than ever before. This is partly the result of the impact of ICTs on our understanding of concepts of nationalism and globalization, the interconnectedness of the world economies, and situations of injustice, disaster and the growth of dictatorships. The local, national and global worlds we thought education was preparing students for have shrunk and been transformed. At the same time, digital imaging of the activities of the brain in the process of learning has enhanced our understanding of the importance of active, problem-based and collaborative learning as what we should be doing in schools. Further, students need to be digitally literate, that is, “to understand the power of images and sounds, to be able to manipulate, transform and transmit digital media and transfer them easily into other forms” (Bamford et al., 2005, p. 2). Taken together, these signal changes in society’s expectations for schooling, but it takes time for these to be translated into changes in what is measured on examinations, identified as required educational outcomes in curriculum documents, and taught to teachers, whether in preparation or in practice.

How teachers are actually using new technologies is not generally known, hence the large number of case studies in this area (Haughey, 2002); and, as Abrami and his colleagues point out, there are many methodological difficulties in estimating impacts when introducing new processes into an otherwise established system based on human interaction. One of the initial problems with the “meteor” analogy was that studies sought to examine what differences could be attributed to the introduction of computers in classrooms. Writing in 2004, Cox and Abbott noted that “empirical evidence on the role of ICT in educational attainment has been the Holy Grail for some researchers and many policy makers for many years” (p. 12) and concluded that context was so strong an intervening variable that it was impossible to parcel out the effects of the new technologies, and that teachers’ beliefs about, comfort with and pedagogical approaches to ICT were equally important and complex intervening variables. By context, I am referring, for example, to the reasons why student achievement gains in mathematics and science courses are positive overall but not consistently so. Researchers such as Cox and Webb (2004) have identified three main reasons: hardware and software variations among schools; unequal variation in the use of technology in schools; and concurrent reforms in other areas, such as pedagogy or assessment, that make parceling out gains attributed directly to new technology use nigh impossible. In addition, rapid developments in technology make longitudinal studies difficult. This raises important questions about how to document ICT-based changes through appropriate and authentic assessment processes.

The importance of teachers’ level of comfort with, attitude towards and general use of ICTs has turned out to be a major issue. Since teachers are generally very concerned with instructional outcomes and with jurisdictional and parental expectations concerning grades, any change in the classroom that seems less efficient, taking up more time with no immediately evident gain, is unlikely to be adopted. Students may be equally unwilling to spend time on such activities. Hence, the inclusion of ICTs as an expectation in educational outcomes is by itself likely to be insufficient to bring about change.

Taking these arguments together, the task Abrami and his colleagues have undertaken is a difficult if not impossible one. The question of ‘what is e-learning’ seems too broad to provide a suitable level of granularity; the variation involved within the term is too diverse to provide a basis for decision-making. If the topic itself is too unstable, then what about the method? Does this research categorization scheme work? Does it provide us with possibilities for bringing together the findings of studies with differing orientations?

My reading of their document suggests that they are treating these orientations as differences in method rather than in orientation. They classify their studies using terms such as surveys, case studies and experimental studies, without any recognition of studies that begin from different ontological or epistemological premises. I did not review their individual classifications. It was sufficient to read that of the 2042 articles they identified, they reviewed 1146, and of those included 726 in their analysis. Of those, 152 were classified as primary empirical research, which would include both quantitative and qualitative studies. They subsequently narrowed these down to 17 studies, which they then examined for effect size. While establishing effect sizes is a well-documented procedure, developing a single “impact” scale for the 726 is less well developed. The process raised a number of issues for me: the translation of descriptive findings into equidistant quantitative intervals on a single scale; the combination of general public opinion, trade documents, policy documents and empirical research as equally important in identifying impacts of e-learning; the expectation that “impact” could be a generic outcome measure (even with subsets); and the interpretation of these numbers as means linked to defined outcomes.

In the end I pondered the differences between this analytical method and a good literature review. For a good literature review we expect a wide gathering of possibly relevant articles, a sifting by type, a review of methods, of conclusions and a subsequent grouping by some series of constructs so as to illuminate the reader about the topic, the issues which have already been identified by previous researchers, their limitations, and possible issues which still need exploration. Does the adding of a rating scale make the process more rigorous? I don’t believe so. My hope that we might have found an alternative way to explore the findings of both post-positivist and interpretivist or critical studies has not been confirmed by what I read. Perhaps so many of their findings seem reasonable because they followed the steps of a good literature review.

I, too, am optimistic about e-learning and ICTs. However, their development and integration with the myriad ways we teach and learn are such that it is unlikely that we will be able to measure their impact with the kind of large-scale studies proposed by Abrami and his colleagues. The continuing innovation is itself a moving target, and the assumption of “all other things being equal” will not hold now, if ever. Abrami and his colleagues conclude that “it is a shame to attempt innovation and not to be able to tell why it works or doesn’t work” (p. 51). This is indeed a question that informs all science, and many researchers have examined and are examining classroom-based effects of ICTs. Such small-scale studies can be valuable—and may well explain the large numbers of qualitative case studies in the document review.

However, ICTs are also changing us. Besides the obvious use of computers in the workplace, we are a more communicative society: we travel more; we make more cell phone calls, send text messages, and increasingly use VoIP to talk to people around the world; we spend a large proportion of our time sending and reading e-mails; and we multi-task, doing any number of these at the same time. We are using the Internet to meet partners, to exchange social talk, to play games, to post our home-made videos—and all of this is in out-of-school time. How are these facets of contemporary society affecting the way we teach and learn in schools? That is what we need to consider in the short term. Otherwise schools will become increasingly irrelevant; we need to think about the objectives we have espoused for schooling—are they appropriate for soon-to-be adults living in the 21st century? E-learning, defined broadly, is no longer an innovation except in schools. Perhaps we are asking the wrong questions.


Bamford, A., Barish, S., Braude, S., Chen, M., Johnson, L., & Woolsey, K. (Eds.). (2005). A global imperative: The report on the 21st century literacy summit. The New Media Consortium. Retrieved October 22, 2005, from

Breuleux, A. (2001). Imagining the present, interpreting the possible, cultivating the future: Technology and the renewal of teaching and learning. Education Canada, Fall 2001. Retrieved December 27, 2006 from

Cox, M., & Abbott, C. (Eds.). (2004). A review of the research literature relating to ICT and attainment. A report to the DfES. Retrieved October 22, 2005, from

Cox, M., & Webb, M. (Eds.). (2004). An investigation of the research evidence relating to ICT pedagogy. A report to the DfES. Retrieved October 22, 2005 from (See also Webb, M., & Cox, M. (2004). A review of pedagogy related to information and communications technology. Technology, Pedagogy and Education, 13(3), 235–286.)

Denzin, N., & Lincoln, Y. (2000). Handbook of Qualitative Research (2nd ed.). Thousand Oaks, CA: Sage.

Dickinson, P., & Ellison, J. (2000). Plugging in: The increase of household Internet use continues into 1999. (Statistics Canada Catalogue no. 56F0004MIE, No. 1). Retrieved December 27, 2006 from

Haughey, M. (2002). Canadian Research on Information and Communications Technologies: A State of the Field. Prepared for the 2002 Pan-Canadian Education Research Agenda Symposium, Montreal, QC.

Levin, B. (2005). The future of Canadian education policy. Point/Counterpoint, UCEA Review, XLV(1), 13–15.

McFarlane, A. (2005). Is ICT transforming education? New technologies in traditional classrooms—what exactly is going on? Retrieved October 22, 2005 from

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.

Sciadas, G. (2006). Our lives in digital times. Connectedness Series. Cat. No. 56F0004MIE – no. 014. Statistics Canada.

Statistics Canada. (2006). Canadian Internet Use Survey. The Daily, August 15. Retrieved December 27, 2006 from

Statistics Canada. (2004). Study: Connectivity and learning in Canada’s schools. The Daily, September 24. Last retrieved November 14, 2005, from

Ungerleider, C., & Burns, T. (2002). Information and communication technologies in elementary and secondary education: A state of the art review. Prepared for the 2002 Pan-Canadian Education Research Agenda Symposium, Montreal, QC.

ISSN: 1499-6685