Canadian Journal of Learning and Technology

Volume 32(3) Fall / automne 2006

A Review of e-Learning in Canada: A Rough Sketch of the Evidence, Gaps and Promising Directions

Philip C. Abrami, Robert M. Bernard, Anne Wade, Richard F. Schmid, Eugene Borokhovski, Rana Tamim, Michael Surkes, Gretchen Lowerison, Dai Zhang, Iolie Nicolaidou, Sherry Newman, Lori Wozney, Anna Peretiatkowicz

Authors

Philip C. Abrami is Professor, Research Chair, and Director of the Centre for the Study of Learning and Performance (CSLP), Concordia University, Montreal, Quebec. His research interests include educational technology, the social psychology of education, and research synthesis.

Robert M. Bernard is Professor of Education at Concordia University in Montreal, Quebec, and a member of the CSLP. He is also a Co-Chair of the Education Coordinating Group of the Campbell Collaboration. His methodological expertise is in statistics and research design and meta-analysis.

Anne Wade (M.L.I.S.) is Manager and Information Specialist at the CSLP. Her expertise is in information literacy, information storage and retrieval, and research strategies. She has been a lecturer in the Information Studies Program in the Department of Education, Concordia University, for over a decade; she is Convenor of the Campbell Collaboration’s Information Retrieval Methods Group, a member of Campbell’s Education Coordinating Group, and an Associate of the Evidence Network, UK. Wade has worked and taught extensively in the field of information sciences for twenty years.

Richard F. Schmid is Professor and Associate Director of the CSLP. His research interests include instructional design, learning sciences, and the uses of technology in postsecondary education.

Eugene Borokhovski is currently completing his Ph.D. in Experimental Psychology at Concordia University in Montreal, Quebec. He is also a senior research assistant at the CSLP. His research interests include cognitive mechanisms of language learning, educational psychology, and, in particular, the methodology of systematic reviews and meta-analyses.

Rana Tamim is a Ph.D. Candidate in Educational Technology at Concordia University. She is a part-time faculty member in the Department of Education and a research assistant at the CSLP. Her research interests focus on the role played by computer technology in facilitating learning. Other interests include science education, generative learning strategies, and meaningful learning.

Michael A. Surkes is a doctoral student in Educational Technology at Concordia University in Montreal, Quebec, Canada. His academic qualifications include degrees in physiological psychology, experimental psychology and philosophy, and his research concerns metacognition, metamotivation, and the development of complex and coherent conceptual frameworks.

Gretchen Lowerison is currently completing her Ph.D. in Educational Technology at Concordia University in Montreal, Canada. She is a part-time faculty member in the Department of Education and a research assistant for the CSLP. Her research interests focus on the role that computer technology plays in facilitating learning in formal and informal learning environments with respect to self-regulation, learning strategies, intrinsic motivation and perceived competence.

Dai Zhang is a Ph.D. Candidate in Educational Technology at Concordia University. She is the information technology educational adviser at John Abbott College and a research assistant at the CSLP. Her research interests focus on collaborative instructional design, e-learning system design, and distance learning.

Iolie Nicolaidou is a PhD student in the Educational Technology program at Concordia University and a member of the CSLP. Her research interest is in the effect of digital process portfolios on elementary school students’ writing skills and self-efficacy.

Sherry Newman is completing her Master’s degree in Educational Technology at Concordia University where she worked as a research assistant for the CSLP. She now works for the World University Service of Canada in Ottawa. Her research interests lie in the areas of distance education, creative photography applications for educational purposes and systematic reviews.

Lori Wozney is a PhD candidate in the Educational Technology program at Concordia University. She has worked as a research assistant in several systematic reviews undertaken by the CSLP. Lori’s research experiences and interests focus on supporting evidence-based practice in the classroom and designing instruction to support self-regulated learning.

Anna Peretiatkowicz is an undergraduate student in the Faculty of Arts and Science at Concordia University, majoring in Linguistics. She is also a library assistant at the Centre for the Study of Learning and Performance.

Abstract

This review provides a rough sketch of the evidence, gaps and promising directions in e-learning from 2000 onwards, with a particular focus on Canada. We searched a wide range of sources and document types to ensure that we represented, comprehensively, the arguments surrounding e-learning. Overall, there were 2,042 entries in our database, of which we reviewed 1,146, including all the Canadian primary research and all scholarly reviews of the literature. In total, there were 726 documents included in our review: 235 – general public opinion; 131 – trade/practitioners’ opinion; 88 – policy documents; 120 – reviews; and 152 – primary empirical research. The Argument Catalogue codebook included the following eleven classes of variables: 1) Document Source; 2) Areas/Themes of e-learning; 3) Value/Impact; 4) Type of evidence; 5) Research design; 6) Area of applicability; 7) Pedagogical implementation factors; 8) A-priori attitudes; 9) Types of learners; 10) Context; and 11) Technology Factors.

We examined the data from a number of perspectives, including their quality as evidence. In the primary research literature, we examined the kinds of research designs that were used. We found that over half of the studies conducted in Canada are qualitative in nature, while the rest are split evenly between surveys and quantitative studies (correlational and experimental). When we looked more closely at the nature of the research designs, we found that 51% are qualitative case studies and 15.8% are experimental or quasi-experimental studies. It seems that studies that can help us understand “what works” in e-learning settings are underrepresented in the Canadian research literature.

The documents were coded to provide data on outcomes of e-learning (we also refer to them as “impacts” of e-learning). Outcomes/impacts are the perceived or measured benefits of e-learning, whereas predictors are the conditions or features of e-learning that can potentially affect the outcomes/impacts. The impacts were coded on a positive-to-negative scale and included: 1) achievement; 2) motivation/satisfaction; 3) interactivity/communication; 4) meeting social demands; 5) retention/attrition; 6) learning flexibility; and 7) cost. Based on an analysis of the correlations among these impacts, we subsequently collapsed them (all but cost) into a single impact scale ranging from –1 to +1. We found, generally, that perceived or actually measured impact varies across the types of documents: it appears to be lower in general opinion documents, practitioner documents and policy-making reports than in scholarly reviews and primary research. On the one hand, this may represent an expression of hope for positive impact; on the other, it may reflect reality. Where there were sufficient documents to examine and code, impact was high across each of the CCL Theme Areas: Health and Learning was the highest, with a mean of 0.80, and Elementary/Secondary was the lowest, with a mean of 0.77. However, there was no significant difference between these means.
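
The report does not state the exact aggregation rule for this composite, so the following is only a minimal sketch of how such a scale is typically formed, assuming each coded non-cost impact takes a value between –1 and +1:

\[
\mathrm{Impact}_j \;=\; \frac{1}{n_j}\sum_{i=1}^{n_j} x_{ij}, \qquad x_{ij} \in [-1, +1],
\]

where \(x_{ij}\) is the i-th of the \(n_j\) non-cost impact categories (achievement, motivation/satisfaction, interactivity/communication, meeting social demands, retention/attrition, learning flexibility) actually coded in document j. Averaging correlated codes in this way keeps the composite within the –1 to +1 range reported above.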

The impact of e-learning and technology use was highest in distance education, where its presence is required (Mean = 0.80), and lowest in face-to-face instructional settings, where its presence is not required (Mean = 0.60). Network-based technologies (e.g., the Internet, the Web, CMC) produced a higher impact score (Mean = 0.72) than straight technology integration in educational settings (Mean = 0.66), although this difference was considered negligible. Interestingly, among the Pedagogical Uses of Technology, student applications (i.e., students using technology) and communication applications (both Mean = 0.78) had higher impact scores than instructional or informative uses (Mean = 0.63). This result suggests that student manipulation of technology in achieving educational goals is preferable to teacher manipulation of technology.

In terms of predictor variables (professional training, course design, infrastructure/logistics, type of learners [general population, special needs, gifted], gender issues, ethnicity/race/religion/Aboriginal status, location, school setting, context of technology use, type of tool used, and pedagogical function of technology), we found the following: professional development was underrepresented compared to issues of course design and infrastructure/logistics; most attention is devoted to general-population students, with little representation of special needs or gifted students, or of issues of gender or ethnic/racial/religious/Aboriginal status; the greatest attention is paid to technology use in distance education, and the least to the newly emerging area of hybrid/blended learning; and the most attention is paid to networked technologies such as the Internet, the WWW and CMC, with the least paid to virtual reality and simulations. Using technology for instruction and using technology for communication are the two most frequently coded categories of pedagogical use.

In the final stage, the primary e-learning studies from the Canadian context that could be summarized quantitatively were identified. We examined 152 studies and found a total of 7 that were truly experimental (i.e., random assignment with treatment and control groups) and 10 that were quasi-experimental (i.e., not randomized but possessing a pretest and a posttest). For these studies we extracted 29 effect sizes or standardized mean differences, which were included in the composite measure.

The mean effect size was +0.117, a small positive effect. Approximately 54% of the e-learning participants performed at or above the mean of the control participants (the 50th percentile), an advantage of about 4 percentage points. However, the heterogeneity analysis was significant, indicating that the effect sizes were widely dispersed. It is clearly not the case that e-learning is always the superior condition for educational impact.
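
For readers less familiar with these metrics, here is a brief sketch of the standard meta-analytic formulas behind the figures above; the exact estimator the authors used (e.g., Cohen’s d versus Hedges’ g) is not specified in this report. Each effect size is a standardized mean difference between the e-learning and control groups:

\[
d \;=\; \frac{\bar{X}_{e} - \bar{X}_{c}}{SD_{pooled}}, \qquad \bar{d} = +0.117.
\]

The percentile claim follows from the normal curve: \(\Phi(0.117) \approx 0.547\), so the average e-learning participant scores at roughly the 54th percentile of the control distribution, about 4 percentage points above the expected 50th percentile. The heterogeneity test is conventionally Cochran’s Q,

\[
Q \;=\; \sum_{i=1}^{k} w_i\,(d_i - \bar{d})^2,
\]

referred to a chi-square distribution with k – 1 degrees of freedom; a significant Q, as reported here, means the k effect sizes vary more than sampling error alone would predict.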

Overall, we know that research in e-learning has not been a Canadian priority; the culture of educational technology research, as distinct from development, has not taken on great import. In addition, there appears to have been a disproportionate emphasis on qualitative research in the Canadian e-learning research culture. We noted that there are gaps in areas of research related to early childhood education and adult education. Finally, we believe that more emphasis must be placed on implementing longitudinal research, whether qualitative or quantitative (preferably a mixture of the two), and that all development efforts be accompanied by strong evaluation components that focus on learning impact. It is a shame to attempt innovation and not be able to tell why it works or doesn’t work. In this sense, the finest laboratories for e-learning research are the institutions in which it is being applied.

Implications for K-12 Practitioners

Implications for Post-Secondary

Implications for Policy Makers

Résumé:

Cette étude préliminaire présente, dans ses grandes lignes, les résultats de recherche, les lacunes et les directions prometteuses observés en matière d’apprentissage électronique depuis 2000, l’accent étant mis tout particulièrement sur la situation au Canada. Notre recherche a porté sur un vaste éventail de sources et de documents, en vue d’englober l’ensemble des points de vue sur l’apprentissage électronique. Au total, notre base de données comportait 2 042 entrées. De ce nombre, 1 146 ont fait l’objet d’un examen, dont l’ensemble des travaux de recherche primaire et des analyses documentaires effectués au Canada. Notre étude a porté sur 726 documents : 235 faisaient état de l’opinion du public en général; 131 portaient sur l’opinion des gens du métier et praticiens; 88 étaient des documents d’orientation; 120 étaient des analyses documentaires; 152 consistaient en des recherches empiriques de base. Les divers points de vue ont été catalogués selon les onze catégories de variables suivantes : 1) source du document; 2) domaines/thèmes de l’apprentissage électronique; 3) valeur/effet; 4) type de résultat; 5) méthodologie de la recherche; 6) domaine d’application; 7) facteurs de mise en œuvre sur le plan pédagogique; 8) attitudes a priori; 9) types d’apprenants; 10) contexte; 11) facteurs technologiques.

Les données ont été analysées selon plusieurs perspectives, notamment leur valeur qualitative en tant que résultats. Dans les travaux de recherche primaire, nous avons examiné les méthodologies de recherche utilisées. De cet examen, il est ressorti que plus de la moitié des études menées au Canada sont de nature qualitative, tandis que le reste est divisé en deux parties d’égale importance : sondages et études quantitatives (corrélationnelles et expérimentales). L’analyse de la nature des méthodologies de recherche a révélé que 51 % étaient des études de cas qualitatives et 15,8 % des études expérimentales ou quasi-expérimentales. Il semblerait donc que les études susceptibles de nous aider à mieux comprendre « ce qui fonctionne » dans un contexte d’apprentissage électronique sont sous-représentées dans la recherche faite au Canada.

Chacun des documents a reçu un code afin de faciliter l’extraction des résultats de l’apprentissage électronique (aussi appelés « effets » de l’apprentissage électronique). Les résultats/effets sont les avantages perçus ou mesurés de l’apprentissage électronique. De leur côté, les variables explicatives sont les conditions ou les caractéristiques de l’apprentissage électronique susceptibles d’influer sur les résultats/effets. Les effets ont été codés selon une échelle allant du positif au négatif, comprenant les éléments suivants : 1) réalisation; 2) motivation/satisfaction; 3) interactivité/communication; 4) réponse aux demandes sociales; 5) maintien aux études/perte d’effectifs; 6) souplesse d’apprentissage; 7) coût. Suivant une analyse des corrélations entre ces effets, nous avons ensuite réduit ces catégories (sauf les coûts) à une seule échelle d’effets allant de –1 à +1. Il en est généralement ressorti que la perception des effets ou leur mesure réelle varient d’un type de documents à l’autre. Leurs valeurs semblent être inférieures dans les documents exprimant l’opinion générale, dans ceux provenant des praticiens et dans les rapports portant sur l’établissement de politiques, que dans les études universitaires et les recherches primaires. Même si, d’un côté, cela exprime l’espoir d’effets positifs, cela traduit peut-être aussi, de l’autre, la réalité. Là où nous disposions de suffisamment de documents à analyser et à coder, la valeur des effets s’est révélée supérieure dans chacun des domaines des thèmes du CCA. Les valeurs relatives au domaine de la santé et de l’apprentissage étaient les plus élevées, affichant une moyenne de 0,80, tandis que celles relatives à l’apprentissage de niveau « élémentaire/secondaire » étaient les plus basses, affichant une moyenne de 0,77. Il n’existe cependant pas de différence significative entre ces moyennes.

Les valeurs attribuées aux effets de l’apprentissage électronique et de l’usage des technologies étaient plus élevées en éducation à distance, là où l’apprentissage électronique est requis (moyenne = 0,80) que dans les contextes d’enseignement présentiel, là où l’apprentissage électronique n’est pas requis (moyenne = 0,60). Les technologies fondées sur des réseaux (p.ex., Internet, le Web, la communication assistée par ordinateur) affichaient des valeurs plus élevées (moyenne = 0,72) que les technologies d’intégration ordinaires dans les milieux éducatifs (moyenne = 0,66), même si cette différence peut être considérée comme négligeable. Élément intéressant : parmi les applications pédagogiques des technologies, celles faites par des élèves (c.-à-d. l’utilisation des technologies par les élèves) et les applications de communication (avec une moyenne, dans les deux cas, de 0,78) enregistraient des valeurs plus élevées que les utilisations éducatives et informatives (moyenne = 0,63). Ces résultats suggèrent que le maniement des technologies par les élèves pour l’atteinte de leurs objectifs éducatifs est préférable au maniement des technologies par le personnel enseignant.

En ce qui concerne les variables explicatives (formation professionnelle, conception du cours, infrastructure/logistique, type d’apprenants [population générale, besoins spéciaux, individus doués], variables associées au sexe, à l’ethnicité, à la race, à la confession religieuse et au statut d’autochtone, situation géographique, milieu scolaire, contexte entourant l’utilisation des technologies, types d’outils utilisés et fonction pédagogique des technologies), les faits suivants sont ressortis : les données sur le perfectionnement professionnel ont été sous-représentées par rapport aux questions liées à la conception des cours et à l’infrastructure/logistique. On a accordé plus d’attention à la population étudiante générale qu’aux personnes ayant des besoins spéciaux, aux élèves doués, ou aux questions liées au sexe, à l’ethnicité, à la race, à la confession religieuse et au statut d’autochtone. On a accordé plus d’attention à l’utilisation des technologies à des fins de formation à distance qu’au domaine émergent de l’apprentissage hybride/combiné. On a accordé plus d’attention aux technologies basées sur des réseaux, comme Internet, le Web et la communication assistée par ordinateur, qu’à la réalité virtuelle et aux simulations. Enfin, les deux catégories d’usage pédagogique des technologies les plus populaires sont l’utilisation à des fins de formation et de communication.

Nous avons déterminé les études primaires sur l’apprentissage électronique menées dans le contexte canadien qui étaient susceptibles d’être résumées quantitativement. Parmi les 152 études examinées, 7 ont été jugées véritablement expérimentales (randomisation avec groupe expérimental et groupe témoin), et 10 quasi expérimentales (pas de randomisation, mais tenue d’un prétest et d’un post-test). Pour ces études, nous avons calculé 29 ampleurs de l’effet ou différences moyennes normalisées, qui ont été intégrées à la mesure composée.

L’ampleur moyenne de l’effet a été évaluée à +0,117, ce qui est peu élevé. Environ 54 % des participants à l’apprentissage électronique ont fait aussi bien, sinon mieux, que les membres du groupe témoin (50e percentile), un avantage de 4 %. Cependant, la valeur de l’hétérogénéité est importante, ce qui révèle une grande dispersion des ampleurs de l’effet. Il est donc clair que l’apprentissage électronique n’est pas toujours une condition qui maximise l’impact sur l’éducation.

Dans l’ensemble, il est clair que, au Canada, la recherche sur l’apprentissage électronique n’est pas une priorité. La recherche sur les technologies éducatives, à distinguer de celle sur le développement technologique, a été laissée pour compte. De plus, il semblerait que, dans le milieu canadien des recherches sur l’apprentissage électronique, on se soit concentré de façon disproportionnée sur la recherche qualitative. En effet, nous avons constaté des lacunes dans les domaines de la recherche portant sur l’éducation de la petite enfance et l’éducation des adultes. Enfin, nous sommes d’avis qu’une plus grande importance doit être accordée à la mise en œuvre de recherches longitudinales, de type qualitatif ou quantitatif (de préférence une combinaison des deux), et que les efforts de développement doivent être étayés de composants d’évaluation solides mettant l’accent sur les effets de l’apprentissage. Il serait en effet dommage de stimuler l’innovation sans savoir pourquoi cela fonctionne ou ne fonctionne pas. Vus sous cet angle, les meilleurs laboratoires d’apprentissage électronique sont les établissements où l’on met en pratique cette démarche.

Implications pour les praticiens de la maternelle au secondaire 5 (12e année)

Implications pour l’éducation postsecondaire

Implications pour les décideurs

Introduction

E-learning has become the general term encompassing the application of computer technologies to education, whether it occurs in face-to-face classrooms, in blended and hybrid courses, in mediated distance education contexts or in online learning environments. The Canadian Council on Learning (CCL) defines e-learning as the development of knowledge and skills through the use of information and communication technologies (ICTs), particularly to support interactions for learning—interactions with content, with learning activities and tools, and with other people. It is not merely content-related, is not limited to a particular technology, and can be a component of blended or hybrid learning (Rossiter, 2002, also 2005 in an address at the CCL Workshop on E-Learning). The CCL recognizes the breadth of the concept of learning through electronic means, as well as its growing pervasiveness in Canadian institutions of formal learning, at the level of elementary and secondary schools, colleges, and universities, as well as in early childhood and health-related learning.

Enthusiasm for, as well as apprehension regarding, the use of e-learning appears widespread as we herald the arrival of the Information Age. To some, e-learning can be used as a powerful and flexible tool for learning (Hannafin & Land, 1997; Harasim, Hiltz, Teles & Turoff, 1995; Lou, Abrami & d’Apollonia, 2001; Scardamalia & Bereiter, 1996). Proponents of the application of electronic technologies to education have long argued that computers possess the potential to transform learning environments and improve the quality of the learning that results (e.g., Bransford, Brown & Cocking, 2000; Jonassen, Howland, Moore & Marra, 2003; Kuh & Vesper, 2001; McCombs, 2000, 2001; Siemens, 2005; WBEC, 2000). Possible means include: increasing access to information (Bransford et al., 2000); providing access to a richer learning environment (Bagui, 1998; Brown, 2000; Caplan, 2005; Craig, 2001); making learning more situated (Bransford et al., 2000); increasing opportunities for active learning and inter-connectivity (Laurillard, 2002; Shuell & Farber, 2001; Yazon, Mayer-Smith & Redfield, 2002); enhancing student motivation to learn (Abrami, 2001); and increasing opportunities for feedback (Jonassen et al., 2003; Laurillard, 2002). A recently completed study (Lowerison, Sclater, Schmid & Abrami, 2006) points to the role of technology as a catalyst for change, supporting the learning process through course design and motivation. Indeed, there is sufficient optimism for technology’s positive impact that governments have established committees, formed task forces, and dedicated substantial funds to the delivery or enhancement of technology-based instruction (CMEC, 2001).

There has also been scepticism (Clark & Sugrue, 1995; Cuban, Kirkpatrick & Peck, 2001; Healy, 1998; Noble, Shneiderman, Herman, Agre, & Denning, 1998; Russell, 1999) about the use of technology to improve learning, including suggestions that it represents a threat to formal education, from kindergarten through university. For example, it may create an imbalance between computer skills and essential academic and thinking skills, foster technology dependencies and isolation rather than independent and interdependent learners, and/or erode the joy of and motivation for learning, replacing them with frustration over failed equipment and the like. Some teachers hold beliefs concerning the usefulness of information and communication technologies (ICT) that parallel their attitudes towards any change to teaching and learning, be it through government-mandated reform or societal pressure. “If the computer can accomplish the task better than other materials or experiences, we will use it. If it doesn’t clearly do the job better, we will save the money and use methods that have already proven their worth” (Healy, 1998, p. 218). Even moderate commentators such as Clark and Sugrue point out that the most likely explanation for increased learning with computer technology lies in instructional method differences, content differences, or novelty effects, and not in the technology itself.

On June 6, 2005, Maclean’s, Canada’s weekly newsmagazine, featured an article on e-learning. Emblazoned across the cover was the title: “How computers make our kids stupid.” The article was one reporter’s attempt to weave together personal anecdotes, stories, and observations with a smattering of selective research evidence. The tale she told was compelling and generated instant and widespread reaction. Certainly the exposé achieved the publisher’s goal—selling magazines. But what about the evidence used in the report? Was it accurate and complete? Did it provide a balanced portrait of the evidence, the important questions, the known, and the unknown? It certainly did not.

Researchers face a similar challenge when they have to sift through the accumulated evidence to form an accurate picture about scientific knowledge. Results vary, often from extremely positive to extremely negative, and it is a challenging task to synthesize complex and detailed evidence succinctly, accurately, and meaningfully. Increasingly, this same challenge exists for policy-makers, practitioners, and the public in a multitude of areas related to the social sciences and education.

Several years ago, Ungerleider and Burns (2002), reviewing mostly Canadian research, found little methodologically rigorous evidence of the effectiveness of e-learning in promoting achievement, motivation, and metacognitive learning, or in facilitating instruction in content areas in elementary and secondary schools. Ungerleider and Burns also emphasized that student academic achievement does not improve simply as a result of having access to computers in the classroom without concurrent changes to instruction.

Does the application of electronic technology, under the general rubric of e-learning, represent a positive development in formal educational settings that should be embraced and pursued, or is it a danger to the achievement of the general societal goal of an educated and skilled populace? Does it actually “make our kids stupid”? There is little doubt about the ongoing and future need for a population that is skilled in the use of computers, that is, learning about technology—but what about learning through technology? The answer to this question is complex, partly because of the vast array of technologies (both hardware and software) that are available for e-learning and their many uses. However, this complexity is magnified when one considers the variety of contexts, learners, subject matters, etc. that make up the fabric of formal learning in Canadian schools and universities.

Our review is a comprehensive, up-to-date examination of e-learning with a special emphasis on Canadian research. Because Canada is investing heavily in the uses of technology for learning, both in classrooms across the nation and at a distance, this project specifically addresses the need for current and comprehensive answers to the practical questions posed by decision-makers at the local, provincial and national levels, and by practitioners who are involved in the day-to-day delivery of quality educational experiences to Canada’s next generation of informed citizens.

We will address three objectives: 1) identify the most effective practices and procedures to promote learning; 2) identify major gaps in our knowledge and understanding of e-learning; and 3) identify the most promising lines of inquiry for addressing those gaps. Our review will be different from a traditional review of primary research: it will be much broader and will incorporate arguments from a number of sources. Such a catalogue of arguments will provide a comprehensive picture of the state of a field from the perspectives of research, policy and practitioner orientations. It will allow an exploration of what works (best practices), an assessment of what is incomplete in the literature, and an informed vision of promising lines of new research. Therefore, the expected outcomes of the literature review include: 1) a description of the major issues facing Canadian educators and policy-makers with regard to the application of e-learning in educational and health institutions and as an avenue for preparing children and adults for lifelong learning; 2) an assessment of the effectiveness of the technologies being harnessed for educational purposes; and 3) identification of the factors that both enhance and detract from the effects of e-learning. We will explore the following sub-areas and themes: Adult Learning; Early Childhood Learning; Elementary and Secondary School Learning; Health and Learning; and Postsecondary Learning.

Approaches and Methodology

This literature review consists of a systematic compilation of current public, policy-making, practitioner and scholarly opinions and positions on the topic of e-learning in the five theme areas. In other words, it goes beyond the usual review of research, limited to primary evidence, by including multiple sources of information, to ensure that a broad base of arguments and opinions about e-learning is represented, and to help situate the primary evidence and the scholarly reviews of the evidence. It also includes, therefore, an analysis of news media, practitioner opinions, and policy documents. We call this compilation of evidence and opinions from a variety of sources an “Argument Catalogue.” The Argument Catalogue codebook provides categories and values that can be applied across both constituencies and theme areas in order to classify conjectures, opinions, concerns, and expert judgments. The Argument Catalogue helps frame the questions and inform the analyses in order to establish what generalizations may be drawn from the existing empirical evidence, what gaps exist, and what may be fruitful avenues for further research.

The value of an Argument Catalogue includes being able to identify the consistencies and inconsistencies that exist between research evidence, practitioner experience, and public perception. For example, do different sources agree about the impact and value of e-learning? Identifying similarities and differences among these sources may help us understand better how to mobilize knowledge to impact policy and practice. Another value of an Argument Catalogue is developing a comprehensive understanding of the issues surrounding e-learning, including its impacts, applications, and the pedagogical, technical, and situational factors that moderate its effectiveness and efficiency. The combination of multiple sources of expertise and evidence—scholarly evidence and understanding from the field—may strengthen or qualify conclusions from any single source and help identify gaps in understanding, offering promising new directions for research. In other words, an Argument Catalogue attempts to provide a comprehensive and inclusive framework for understanding e-learning by giving voice to all the key constituencies who generate and apply what has been learned.

The standards which we strove to achieve in this review included: being comprehensive and current about the documents we reviewed; undertaking a review which was repeatable by others and used a transparent review methodology; ensuring the outcome of the review was clear, the findings were useful, and the results were credible; and finally, using a methodology that was precise, so that our conclusions would be accurate. We sought to be comprehensive and current in order to ensure that we had included the widest range of evidence available to date. To meet this objective required that we search, retrieve and code a large body of evidence. We especially wanted to avoid drawing final conclusions from an insufficient and unrepresentative collection of evidence. We sought to use methods that were repeatable and transparent by outlining each of the steps we followed in the review, so that other reviewers could replicate our methods. We also wanted the review to be clear and useful; we hope that its inclusive nature helps to ensure that our readers find the results to be understandable and credible. Finally, we wanted our review to be precise and accurate; however, time constraints and the scope of the review have limited the extent to which we achieved this objective. The breadth of the review conflicted with the extent to which we could explore documents in depth during the time available for the compilation of this report. Accordingly, we have called this a rough sketch of the state of the field.

Locating the Documents

A systematic search of the literature was conducted on current public, policy-making, practitioner and scientific opinions and positions related to the topic of e-learning in the following theme areas: Adult Learning; Early Childhood Learning; Elementary and Secondary Learning; Health and Learning; and Postsecondary Learning. Specifically, literature from the last five years (2000–2005) for the following areas was located: the policy and decision makers’ perspective (e.g., policy papers, grey literature); public perception (e.g., national and provincial media); practitioners’ perspectives (trade periodicals); theoretical and empirical reviews by scholars; and primary studies conducted in Canadian contexts.

In the first four instances, emphasis was placed on locating French and English Canadian literature, although international sources were also consulted. In the last instance, only primary studies (quantitative and qualitative) on e-learning that were conducted in Canada were located.

Retrieval tools used: A range of databases and web-based sources was consulted to locate relevant items within each sector; the complete list of search tools appears on the CSLP website (see the URL below).

Additionally, manual searches of selected journals were performed: Canadian Journal of Learning and Technology, Journal of Distance Education, and the International Review of Research in Open and Distance Learning. Finally, branching (scanning of bibliographic references at the end of selected articles) was used.

Strategy: Search terms were selected from the following definition of e-learning: “E-learning is the development of knowledge and skills through the use of information and communication technologies (ICTs) to support interactions for learning—interactions with content, with learning activities and tools, and with other people” (Rossiter, 2002; 2005).

A variety of broad terms related to the use of technology in education were used. These included electronic learning, distance education, distance learning, online instruction, multimedia instruction, online courses, web-based learning, virtual classrooms, computer mediated communication, computer-based instruction, computer-assisted instruction, “technology uses in education,” telemedicine, and e-health.

Specific applications of technology (e.g., learning objects, digital portfolios, specific instructional software, etc.) were not used as keywords or descriptors within individual searches, as this would have significantly widened the scope of retrieval. Note, however, that in some cases retrieved items did describe specific applications of technology.

Controlled vocabulary terms were used whenever possible, thus the actual terms used varied according to the field (education, psychology, medicine) and sector (e.g., media vs. scholarly) searched and the controlled vocabulary used within a particular retrieval tool. Please consult the CSLP website at http://doe.concordia.ca/cslp/CanKnow/eLearning.php for a complete list of the search tools consulted, along with the original search strategies for each.
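
As a purely hypothetical illustration (the actual strategies varied by retrieval tool and are posted on the CSLP website), a search statement combining these broad terms might have looked like the following; the field syntax is generic and would need adapting to each database:

    ("electronic learning" OR "e-learning" OR "distance education" OR "distance learning"
     OR "online instruction" OR "online courses" OR "web-based learning"
     OR "computer mediated communication" OR "computer-assisted instruction"
     OR "technology uses in education")
    AND PY(2000-2005)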

Results: Overall, 2,042 articles were identified through the searches and entered in our database. Because of time limitations, we reviewed 1,146 of these, including all the Canadian primary research and all scholarly reviews of the literature that were retrieved. Of these, 726 documents were included for further analyses:

  235 – general public opinion
  131 – trade/practitioners’ opinion
  88 – policy documents
  120 – reviews
  152 – primary empirical research

The remaining 420 documents were excluded from further analysis.

The CSLP website at http://doe.concordia.ca/cslp/CanKnow/eLearning.php provides a complete list of the 726 citations.

Coding the Documents

An Argument Catalogue codebook was developed to help the review team document the research process; this document served as the basis for coding the articles. The codebook can be found on the CSLP website at http://doe.concordia.ca/cslp/CanKnow/eLearning.php

The Argument Catalogue codebook was developed by taking a sample of documents of various types to ensure that the major issues covered by the documents were reflected in the codebook. The intent was to ensure that the codebook reflected the important topics raised in the different literatures. A combination of closed-ended and open-ended (i.e., emergent) coding was employed. The documents from each of the five theme areas were subsequently coded using the codebook.

The Argument Catalogue codebook includes the following classes of variables (the list below is not exhaustive but deals with those categories on which the most emphasis was placed in the analyses):

  1. Document Source (i.e., general public opinion, trade/practitioner’s position, policy-making reports, scholarly/academic reviews, and primary empirical research)
  2. Areas/Themes of e-learning (i.e., Adult Learning, Early Childhood Learning, Elementary and Secondary School Learning, Postsecondary Learning, Health and Learning)
  3. Value/Impact, comprising:
     - Achievement
     - Motivation/satisfaction/attitudes
     - Interactivity/communication
     - Meeting social demands
     - Attrition/retention
     - Learning flexibility/accessibility
     - Cost
  4. Type of evidence (i.e., opinion, survey, quantitative data, qualitative data)
  5. Research design (i.e., pre-experimental, correlational, quasi-experimental, true experiment/RCT)
  6. Area of applicability (i.e., International, American, Canadian, provincial, local)
  7. Pedagogical implementation factors
  8. A-priori attitudes (not analysed; insufficient cases)
  9. Types of learners
  10. Context
  11. Technology Factors

The coding of the documents was done by a team of eight graduate students, most of whom had previous experience coding studies for other systematic reviews. The team met weekly with the faculty supervisors to review progress and to address questions and concerns. Because of the volume of documents to be coded and the limited timeframe, no article was coded by more than one student; therefore, no measure of inter-rater reliability could be calculated. To compensate, preliminary meetings were conducted in order to reach common understandings of the meanings of the coding categories and values; several (5 to 10) documents from each category were coded by all team members, and differences were resolved through discussion.
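
Had resources permitted double coding, agreement on such calibration documents could have been quantified rather than resolved only through discussion. The sketch below is a minimal, hypothetical illustration using Cohen’s kappa; the coder data, the impact coding values, and the use of Python are inventions for the example, not part of the original study.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement, from each rater's marginal category frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Hypothetical impact codes (-1, 0, +1) from two coders on ten documents.
    coder_1 = [1, 1, 0, -1, 1, 0, 1, 1, -1, 0]
    coder_2 = [1, 0, 0, -1, 1, 0, 1, 1, 0, 0]
    print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.68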

Data Entry and Analysis

Each coder entered data into a Microsoft Excel spreadsheet that included both open-ended and closed-ended category values. The separate Excel files were combined to form a master database, which was then converted to an SPSS data file. The data were spot checked for accuracy and corrected prior to the main analyses.
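
A present-day equivalent of this merge-and-check step might be scripted as in the sketch below. This is an illustration only: the original workflow used Excel and SPSS directly, and the file names and the "impact" column are assumptions made for the example.

    import glob
    import pandas as pd

    # Combine one spreadsheet per coder into a single master table.
    # Assumes every file shares the column layout defined by the codebook.
    frames = [pd.read_excel(path) for path in sorted(glob.glob("coder_*.xlsx"))]
    master = pd.concat(frames, ignore_index=True)

    # Spot-check for accuracy, e.g., composite impact codes must lie in [-1, +1].
    assert master["impact"].between(-1, 1).all(), "out-of-range impact code"

    # Export for the statistics package (the original team converted to SPSS).
    master.to_csv("argument_catalogue_master.csv", index=False)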

Results

The Results are divided into five sections: Sample of Document Conclusions; Analysis of the Documents Retrieved and Coded; Frequencies of Predictor Variables and Impacts of E-Learning; What Predictor Variables Explain Technology Impacts? and Quantitative Summary of the Canadian Primary Research on E-learning.

Sample of Document Conclusions

The team reviewed all of the open-ended (emergent) coding, which was intended to identify, where possible, the authors’ principal positions, the arguments presented, or the conclusions reached in every document. Mirroring the general coding procedure, preliminary meetings were conducted to ensure a common understanding of the task, after which one coder concentrated on the analyses of each type of literature source. The most salient, interesting, informative, or powerful message, or the most representative or frequently appearing message, in each document was extracted. These summaries appear below, organized by the source of evidence.

General public opinion:

One of the major messages found in the general public opinion literature is that e-learning is a rapidly growing field in education. There is speculation that traditional Canadian universities will come under increased pressure to expand their online offerings because of growing enrolments (“Online Courses Grow in Popularity”, 2003). Online courses, and institutions such as Athabasca, are viewed as possible solutions to the problems of growing enrolments and limited physical space on traditional campuses (Amiot, 2001; Rodrigue, 2001; Sinnema, 2005; Summerfield, 2000; Wanless, 2000).

Similarly, there is the perception that e-learning can provide greater access to educational programs for individuals who otherwise would not be able to participate in them. Computer-based distance education is viewed as a means for learners restricted by time or distance to engage in continuing education. In the words of Timiskaming elder Jenny Enair, “For anyone who is motivated to learn, but must care for an older parent or young children at home, distance learning is truly a godsend” (“Contact North Enjoys High Enrolment in Tri-Towns”, 2002)—a position shared by many others (Foster, 2002; Hamilton, 2002; “Online Learning Opens Up Options: Computers Bring School to Busy Lives”, 2003).

There are, however, two prevalent concerns about e-learning. The first is its high cost and the potential risk that money will be diverted away from more appropriate or urgent projects to fund e-learning. One article in the Toronto Star informed readers that, despite the potential benefits of having education available to all Canadians, establishing nation-wide high-speed internet access could cost four billion dollars (“Internet for All”, 2001). Another reported on the fear that computer-based work teaches skills that may become obsolete at the expense of time-tested skills, such as critical thinking (“Classroom Computers Get Failing Grade: Fixing High-Tech Tools Keeps Teachers From Their Primary Tasks”, 2000; “Technology v Textbooks: Costly Computer Systems Compromise Other Essential Teaching Needs”, 2001; Thibodeau, 2001).

The second concern is that e-learning might have a negative impact on the development of children’s creative skills. Some educators, while seeing benefits to the use of technology by young children, caution that such use must be guided. “It has to do with how you use a powerful tool … [you] can’t just sit them in front of a computer and say: go to it,” one expert is quoted as saying (“Computers and the Young: They Can Impede Development, One Group of Experts in the U.S. Says. Meanwhile, Quebec Is Forging Ahead With Its New-Technology School Curriculum”, 2000). This concern is also expressed in “The Birth of the Celtic Tiger” (2003) and in Kelly, Lord and Marcus (2000, September 25).

Finally, there is the belief that, despite the growth of distance education and other forms of technology-based learning, teachers and classrooms will remain a necessary part of the education system. Teachers’ ability to communicate with students and to facilitate their learning is still highly valued (Dibbon, 2000; Wake, 2000; Wood, 2000).

Practitioners’ point of view:

Practitioners are mostly positive about the development of e-learning. They believe that e-learning brings more opportunities and flexibility to learning. Advanced technologies play an important role in reaching out to and connecting with students who live in remote areas, who are homebound, or who seek life-long learning (Crichton & Kinsel, 2000; Haddow, 2003; Hillman, 2000; Johnston & Mitchell, 2000; Oakley & Stevens, 2000). Another message is that e-learning, to some extent, enhances self-regulated learning (Hadwin & Winne, 2001).

One of the major concerns emerging from trade and practitioners’ points of view is that e-learning requires careful attention to instructional design and planning (Bell, McCoy, & Peters, 2002; Hofmann & Dunkling, 2002; McKenzie, 2002; Nelson, 2001). Technologies provide considerable learning potential, but they do not guarantee learning. As Peter Swannell, the Vice-Chancellor of the University of Southern Queensland, stated, “Without a new pedagogy, the technology will fail” (cited by Hofmann & Dunkling, 2002). We need a proper instructional plan before we integrate technology into the classroom. However, some educators note that there are not enough financial means for providing such instructional support. Although professional training and support are crucial for the success of e-learning, they may suffer from competition with e-learning technologies for financing (Lewis & Jenson, 2001; Tuinman, 2000). In effect, emergent technologies divert substantial financial resources from other components of the educational infrastructure, forcing us to reconsider our priorities. Too much funding goes towards the purchase, installation, and maintenance of technology per se, and too little towards teacher support and professional training. The message delivered is very clear: we need to invest more in the pedagogical side of the integration of technology in classrooms.

The development of e-learning requires universities to adopt new policies and strategies to meet the emerging social demands brought by advanced technologies (Fielden, 2001; Shapiro, 2002). The implementation of new technologies exposes institutions to a global market, makes them collaborate with their competitors, and challenges the traditional monopoly over accreditation and certification, while bringing new sources of revenue into the picture. The change starts with the integration of technology in the classroom and spreads to the whole education system.

Policy documents:

The overall perceptions of technology use in education by the policy-making community are favourable. The HORIZON Report (New Media Consortium, 2004) offers prognoses of which technologies will become important in higher education in the next one to five years. Several papers emphasize the role Canada should assume in implementing technology-based approaches to education (Industry Canada, Advisory Committee for Online Learning, 2001; Rossiter, 2002) and why doing so is both possible and necessary. Finally, Davis and Carlsen (2004) named four major reasons to invest in ICT in education: 1) ICT is an imperative for economic competitiveness; 2) ICT increases educational attainment; 3) ICT increases access to education; and 4) ICT is a catalyst promoting changes in education.

One of the major messages from the policy documents concerns the need to bridge the gap between theory, research and practice (Kinuthia, 2004; Mühlhauser, 2004; Pittard, 2004). There is a concern, however, about how exactly this is to be done. According to Oliver and Conole (2003), the use of evidence-based practice in policy making can be criticized on methodological, epistemological, and moral grounds: “While the aim of evidence-based practice—linking research, practice and policy—is valuable, its methods are questionable” (p. 394). Among the problems with this ideology, they mentioned the “inappropriate dismissal of qualitative approaches” and “disempowering practitioners” by forcing them into a tradition “that blends evaluation, research and practice together” (p. 394).

Policy documents have also generated a substantial number of “conditional” messages, specifying what conditions should be met, what factors should be taken into account, and what approaches should be implemented in order to make e-learning truly efficient. Harley (2002) emphasized the need to know when and how to get involved in distance education. Merisotis (2001) named the following benchmarks for success in Internet-based distance education: the need for learning outcomes—not the availability of existing technology—to be the determinants of whether and how the technology should be used to deliver course content; the necessity of preliminary information and technical assistance for students and faculty to facilitate their way into and through technology-based courses; properly documented and accessible course standards and requirements, as well as security measures; and accounting for individual differences in learners’ personalities and motivation. This list closely matches the conditions necessary for successful use of technology in teacher education produced by the UNESCO Division of Higher Education (2002): consistent policies and standards, accounting for cultural context, adequate infrastructure, proper professional training, attention to pedagogical issues, etc. See also Padilla and Zalles (2001) or Katz (2002), who insisted that the most important directions for policies in the area of ICT-based learning are educational (and not technological) access, literacy, matching budgeting with standard-setting activities, balancing access with privacy, and customizing and personalizing service offerings.

Literature reviews:

Attitudes towards e-learning, reflected by scholarly and academic reviews, range from neutral to positive. On one hand, it is noted that e-learning (e.g., DE, CAI, etc.) is at least as effective as traditional instructional strategies (Rosenberg, Grad, & Matear, 2003), and that there are no major differences in academic performance between the more traditional and more technology-oriented modes of instruction (Cavanaugh, 2001). On the other hand, many reviews go further, reflecting a particularly positive attitude towards the impact of e-learning (Kulik, 2003; Mayer, 2003; Steinberger, 2002). Benefits include offering a variety of new possibilities to learners (Breuleux, Laferrière, & Lamon, 2002), in addition to having a positive effect on students’ achievement in different subject matter areas (Chambers, 2003; Christmann & Badgett, 2003; Soe, Koki, & Chang, 2000).

However, reviews also acknowledge the need to address more closely design issues in e-learning courses and activities. Developing effective strategies for teaching and learning is also called for (Meredith & Newton, 2004; Oliver & Herrington, 2003). More specifically, designing instruction to complement the positive attributes of computer technology is advocated by Berg (2000). Addressing learners’ needs in the design of e-learning activities is suggested by some reviews (Crawford, Gannon-Cook, & Rudnicki, 2002; Ewing-Taylor, 1999). Wherever implementation issues are addressed, there seems to be a consensus among reviewers that effective use of e-learning requires the presence of immediate, extensive, and sustained support (Knowles, 2004; Sclater, Sicoly, & Grenier, 2005).

Nevertheless, reviews report a major concern regarding the absence of strong empirical evidence to support the use of e-learning (Torgerson & Elbourne, 2002; Urquhart et al., 2002; Whelan & Plass, 2002). One review considered the quality of research to be inadequate and called for more scientific rigour and less reliance on anecdotal evidence (Terrell, 2002). Another emphasized that advances in DE technologies are outpacing research on their effectiveness (Hirumi, 2002). An additional obstacle facing the advancement of research in the field seems to be that e-learning researchers are not uniform in the methods they use and the questions they ask (Cantoni & Rega, 2004).

Primary empirical studies:

A review of the empirical research conducted in Canada revealed several fairly broad consensuses. One point is that some learners are better prepared than others to use e-learning technologies to facilitate their educational progress; individual “readiness” seems to be a crucial factor in accounting for the success of e-learning applications in education. Cuneo, Campbell & Harnish (2002) list several individual characteristics that may determine the outcomes of technological interventions: motivation, computer skills, literacy skills, communication skills, and learning styles. In a separate paper, Cuneo and Harnish (2002) point out that “quasi-open computer-mediated environments are not safe places for students unsure of their writing skills and knowledge … online learning might not be appropriate for all students” (p. 19). Looker and Thiessen (2002), commenting on the “digital divide” for Canadian youth, remarked that access to, and experience with, computer technology determines “computer competency”, and that this competency is generally associated with urban residents of higher economic status. Their survey of Canadian high school students also indicated that females demonstrated less interest (and less confidence) in achieving computer competency. Bryson, Petrina and Braundy (2003) studied “gender-differentiated participation” in British Columbia schools, noting that “the current percentage of girls enrolled in technology-intensive courses remains extremely low, while performance data indicate that those female students who participate in these courses do better, on average, than male students in these courses” (p. 191). Levin and Arafeh (2002) have also remarked on the differences between students who are “internet-savvy” and those who have had little opportunity to develop their experience with networking tools. Dewar and Whittington (2000) concluded that adult learners’ learning styles (as indicated by Myers-Briggs personality types) can predict the pattern of their participation in online courses, while Li (2002) observed that, “female students tend to initiate conversations, while male students are more likely to enter the dialogue at later stages and respond to previous discussions” (p. 341). Individual metacognitive factors are also implicated in student success; Karsenti (2001) pointed to the relevance of self-direction and self-regulation in university students, concluding, “The main difficulty encountered by students seemed to be their lack of autonomy or the trouble they had in learning by themselves, in managing their own learning” (p. 33).

A second conclusion is that effective instructional design for e-learning applications does not resemble traditional pedagogical methods of information delivery and competence assessment. Cuneo and Harnish (2002) remarked, “The collaborative features of FirstClass [online conferencing software] are compatible with deep and comprehension learning – asking questions, seeking understandings, exploring ideas freely” (p. 19). Schnackenberg, Luik, Nisan, and Servant (2001) concluded, “Many teachers are unsure of what the curriculum objectives are regarding the use of computers at different grade levels … Not enough current pedagogical direction exists for teachers interested in integrating computer technology into the curriculum” (p. 152). Cheng and Myles (2003) suggest that “a change in mind-set” is needed for teachers to realize what they can do, and do better, with new technologies. According to Plante and Beattie’s (2004) survey of Canadian elementary and secondary school principals, “most teachers possessed the required technical skills to use ICT for administrative purposes such as preparing report cards, taking attendance or recording grades, while fewer had the necessary qualifications to effectively engage students in using ICT to enhance their learning” (p. 25). Murphy (2004) describes the need for research into how post-graduate learners, exploring complex and ill-structured problems, can use online asynchronous discussion to advance their learning in collaborative environments; Murphy and Coleman (2004) see this type of learning environment as “a medium that allows for a shift in the locus of communication, interaction and control from the teacher to the learner or from the one-to-many, teacher-to-students’ mode to a many-to-many, students-to-students’ mode” (p. 41). Kanuka, Collett & Caswell (2002) recommend that “risks” be taken in the design and development of new methods for exploiting technological tools.

Given the above, the third consensus arising from the primary research is unsurprising: teachers require focused professional development to learn how to gain optimal benefit from e-learning technologies. Jacobsen, Clifford, and Friesen (2002) note that “questions of how to prepare a new generation of teachers, many of whom have been schooled in old ways despite their relative youth, are increasingly pressing in their urgency” (p. 1). Morgan, White, Portal, Vanyan and Lazenby (2002) studied the implementation by the Toronto Catholic School Board of software designed to promote literacy skills; the teachers they surveyed indicated that their training for the project had been inadequate to the task. A survey of public school administrators in Alberta (Irvine & Montgomery, 2001) revealed the collective attitude that ongoing in-service teacher training is a clear requirement for the Alberta school system; many respondents felt that university teaching instruction provided inadequate preparation in the realm of “curriculum integration.” Schnackenberg et al. (2001) recommended that teacher training be considered in the early stages of planning for technology integration, claiming, “if teachers were contacted in the planning stages, items such as training and support would not be overlooked” (p. 159).

Several studies agreed on a fourth conclusion: the collaborative methods afforded by online technologies facilitate the development of higher-order critical thinking, providing great potential for educative dialogues. Lapadat (2000) in particular credits the time available for reflection with allowing university students to construct “deeper levels of understanding” (p. 20); Lapadat (2004) claims that “discursive interaction in an asynchronous, text-based, online course may be uniquely suited to fostering higher-order thinking and social construction of meaning” (p. 236). Similarly, Pear and Crone-Todd (2002) found that cardiac nurses used online discussions to construct critical educational discourses to support their practices, while Gabriel and MacDonald (2002) reported that their students claimed that discursive participation in an online MBA had a “major impact on their lives.”

A fifth message is that e-learning provides students with disabilities access to previously unavailable educational opportunities. Artiss, Fitzpatrick, Hammett, Kong and Noftle (2001) reported “amazing success” in supporting disadvantaged adults in increasing their traditional and technological literacy. Fichten, Asuncion, Barile, Fossey and Robillard (2001) concluded that “the potential of computer and adaptive computer technologies to remove barriers for students with disabilities is enormous” (p. 52), and collaborative online educational processes have been shown to benefit the reintegration of students after traumatic brain injuries (Verburg, Borthwick, Bennet & Rumney, 2003).

Analysis of the Documents Retrieved and Coded

To orient the reader, we began with a selective summary of the major conclusions from the five sources of evidence. We now turn our attention to a more detailed and comprehensive analysis of the documents.

Table 1 presents the number of documents retrieved in each of the theme areas and Table 2 contains the diverse sources of evidence that were included in the review. These data are shown graphically in Figures 1 and 2. The preponderance of the documents pertains to elementary, secondary, and postsecondary education. There are far fewer articles which discuss adult education or health education, and very few which address early childhood learning. This lack of evidence limits the extent to which analyses of some theme areas could be performed.




Table 3 shows how all the documents included in the review were distributed by the geographic region of applicability of the issues they covered. Documents were coded for geographic area (scope) of applicability whenever they contained an explicit statement linking the issues addressed (the applications or methodology used and/or the perspectives on further development considered) to a particular country, province, etc., or when such a connection was obvious from the context.

Table 4 describes the distribution of the primary Canadian research literature by theme areas. Here we can see that the preponderance of the research on e-learning is at the postsecondary level with many fewer studies at the primary and secondary levels and fewer still in other theme areas.

We also examined the primary literature more closely by considering the types of evidence reported (see Table 5). The majority of the evidence was generated by qualitative studies which are especially well suited to descriptive investigations, and which may result in the generation of useful hypotheses.

A related picture emerged when we examined the types of research designs employed in the primary research (see Table 6). Few of the studies used experimental or quasi-experimental designs that could support causal conclusions about what works.

In the primary research, the types of evidence and the types of research design were not distributed in equal proportions across the theme areas. For example, the percentage of RCTs was highest in the health and e-learning theme area.




Analysis of Predictors and Outcomes: All Sources of Evidence

We took a subset of the variables used to describe the literature in the Argument Catalogue codebook and used these as predictor and outcome variables. The predictor variables included pedagogical features (such as the provision of professional training), learner characteristics (such as gender), context effects (such as school location), and technology factors (such as intervention type). We recoded all of these predictor variables, except for the technology factors, to have dichotomous values, where “1” meant that the factor was considered in the document and “0” meant that the factor received no attention. The advantage of this recoding is that it allowed us to include sources of evidence where these features were mentioned as influential but no directional relationship was specified. For example, a newspaper article describing female students’ use of technology would be an instance where we would code gender as a relevant issue (even though the article might contain no evidence speaking directly to the benefits or gains of e-learning). Similarly, a policy document describing the need to analyse the effects of gender on the digital divide would also be coded as “1” for the gender factor.

Technology factors were extracted from the open-ended coding and were treated as categorical predictors. The means and standard deviations for the dichotomous predictors are given in the upper portion of Table 7. The means are equivalent to the proportion of articles that included the predictor. For example, the mean for professional training is 0.1116, which means that professional training was specifically mentioned as important for effective e-learning in 11.16% of the documents.
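
To make this concrete, the following minimal sketch (in Python, with invented data and column names, not the actual Argument Catalogue codes) shows why the mean of a dichotomous variable can be read directly as a proportion:

    import pandas as pd

    # Invented 0/1 codes for nine documents: 1 = factor addressed, 0 = not addressed.
    docs = pd.DataFrame({
        "professional_training": [1, 0, 0, 1, 0, 0, 0, 0, 0],
        "gender": [0, 1, 1, 0, 0, 1, 0, 0, 0],
    })

    # The mean of a 0/1 variable equals the proportion of documents
    # in which that factor was mentioned.
    print(docs.mean())
    # professional_training ~ 0.22 -> mentioned in about 22% of these documents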

Many of the articles raised issues about the importance of pedagogy in the effective application of e-learning. Of those, few addressed the issue of professional training while almost half mentioned infrastructure support and logistics, and slightly more than one-third were relevant to course development.

Few studies mentioned learner characteristics and school context effects as relevant to e-learning. This was somewhat of a surprise as we expected aspects of the digital divide, including gender, race/ethnicity, learners with special needs, and rural/urban location to feature more prominently in the literature. It is worth noting, however, that the prevalence of a predictor among the collection of documents does not necessarily relate to the extent to which a variable explains impacts of e-learning.

The last collection of predictors examines several aspects of technology. We used predictors which were recoded from the open-ended emergent factors identified by the coders and summarized initially in short text remarks. The resulting three variables were: Context of technology use (i.e., distance education, in class, blended, unspecified); Type of tools used (i.e., internet/intranet/on-line/web, virtual reality/learning objects/simulations, technology integration—computers and software for particular purposes, unspecified); and Intervention type (i.e., instructional, communicative, organizational, analytical/programming, recreational, expansive, creative, expressive, evaluative, informative, and unspecified/missing). Most of the documents focused on distance education and hybrid uses of technology; about one-third focused on the use of technology in face-to-face learning situations. Half of the tools used were network-based technologies, while almost 20% of the documents looked at aspects of technology integration. Finally, the functional uses of technology described in the documents ranged from instructional to communicative and creative. Overall, we noted that the percentage of documents describing aspects of technology use was higher than the percentage describing aspects of pedagogy.


Table 8 presents an analysis of the impact measures by source of evidence. Some of the impact measures were derived from actual measured results, but most were coding judgments and should be interpreted as perceived impacts.

The impact measures were recoded so that +1 was a positive outcome, 0 was a neutral or mixed outcome, and -1 was a negative outcome. Of the 726 documents we coded, the most frequently represented impact was Learning Flexibility with 324 documents (43%), followed by Achievement with 250 documents (33%), Interactivity/Communication with 237 documents (31%), Meeting Social Demands with 225 documents (30%), Motivation/Satisfaction with 204 documents (27%), Cost with 100 documents (13%), and Attrition/Retention with 44 documents (5%). Of the 100 documents that mentioned cost issues, only six came from primary research evidence. Only 250 documents, or about one-third, specifically mentioned achievement impacts of e-learning. One-way between-groups analyses of variance were conducted on each outcome by source of evidence (the independent variable). There was a significant difference on achievement impacts only (p < .05), although all average effects were positive. Practitioner positions were most positive while general/public opinions were least positive. Total impacts are shown graphically in Figure 3.
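
For readers unfamiliar with the procedure, a minimal sketch of such an analysis (using invented impact codes, not our data) might look as follows:

    from scipy.stats import f_oneway

    # Invented achievement-impact codes (+1 positive, 0 neutral/mixed, -1 negative),
    # grouped by source of evidence.
    practitioner = [1, 1, 1, 0, 1, 1]
    public_opinion = [1, 0, -1, 0, 0, 1]
    primary_research = [1, 0, 1, 1, 0, 0]

    # One-way between-groups ANOVA: does mean impact differ by source?
    f_stat, p_value = f_oneway(practitioner, public_opinion, primary_research)
    print(f_stat, p_value)  # p < .05 would indicate that sources differ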



Table 9 presents an analysis of the impact measures by theme area. All the average impacts are in a positive direction. But note the dearth of evidence on learning and motivation for certain theme areas, particularly adult education, early childhood learning, and health and e-learning.


We next examined the interrelationships among the impact measures (see Table 10). As the measures are strongly correlated (except for “Cost”), we formed a composite measure, “Impact,” which is the average of the individual measures (excluding “Cost” and missing values). The composite measure is highly correlated with each of the individual measures and has the advantage of representing a larger number of documents than any single measure alone. It should also be noted that measures of impact include both measured and perceived (speculative or estimated) assessments of the effects of e-learning.
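
A minimal sketch of how such a composite can be formed (hypothetical codes and column names; the actual codebook differs):

    import numpy as np
    import pandas as pd

    # Hypothetical per-document impact codes; NaN = impact not addressed.
    impacts = pd.DataFrame({
        "achievement": [1, np.nan, 0, 1],
        "motivation": [1, 1, np.nan, 0],
        "interactivity": [np.nan, 1, -1, 1],
        "cost": [0, -1, np.nan, np.nan],
    })

    # Composite "Impact" = row-wise mean of the individual measures,
    # excluding "Cost" and skipping missing values.
    composite = impacts.drop(columns="cost").mean(axis=1)  # skipna=True by default
    print(composite)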


Which Predictor Variables are Associated with Perceived Technology Impacts Overall?

In this section we explore the relationship between the individual predictor variables and the composite measure of impact for all the sources of evidence taken together. Doing so combines evidence and opinion without regard for the direction of the relationship between variables coded dichotomously (e.g., applies, does not apply). In other words, we can conclude whether a factor is associated with technology impacts overall but not whether the direction of the influence is positive or negative.

In Table 11 we report the overall perceived impacts of e-learning across sources of evidence. The composite Impact measure is positive in a majority of the documents regardless of the source of evidence. If all the documents from a single source were positive the mean would be +1.00 and the standard deviation would be zero, indicating uniform evidence. That most of the averages were high, with small standard deviations, suggests that the view that e-learning has deleterious effects is an unpopular one, which has received little general support in the primary evidence and literature reviews of evidence.


An examination of the perceived impacts of e-learning across theme areas showed uniformly positive results except for early childhood education (see Table 12). There were only seven studies in this area. This may reflect apprehension about introducing e-learning to young children prematurely, the difficulty of conducting research with this age group, or a general lack of interest in the topic.

According to Table 13, there were significant differences in the overall perceived impacts of e-learning depending on the context of technology use. Impacts are larger in distance learning contexts and smaller in face-to-face learning situations. The benefits of e-learning at a distance may be enhanced when technology is understood to be a necessary condition for learning and instruction.

Table 14 summarizes the significant differences in perceived impact when the nature of the technology tools is examined separately. Documents which highlight the use of virtual reality and simulations show the highest perceived impact of e-learning, though the total number of such documents is rather limited.




Table 15 focuses not on how the technology tools were designed but on how the technology was used pedagogically. Perceived impacts of technology are larger for communicative and student-led purposes and smaller for instructional and informative purposes. A possible interpretation of this finding contrasts using technology as an interactive tool for learning with using technology for information gathering and information transmission.


Table 16 shows that the perceived impact of e-learning is higher among the documents that specifically mention the inclusion of professional training in the uses of technology for learning. This is consistent with the notion that professional development is one key to the effectiveness of educational technology.


Table 17 shows that the perceived impacts of e-learning are higher in documents that mention the importance of addressing technology infrastructure and logistics.


Tables 18 and 19 examine two aspects of context—location and setting. Both factors are related to perceived impacts.



Analysis of Predictors and Outcomes: Research Reviews and Primary Canadian Research

These two literatures together include 223 documents. So that we could examine best practices and procedures in the two literatures combined, we first tested the difference between them on our composite impact measure using a one-way ANOVA. Results revealed that the two bodies of literature did not differ significantly with regard to the impact of e-learning, and so their data could be treated together. Means and standard deviations, plus the F-ratio, are shown in Table 20.


Next, we were interested in knowing whether the results of the foregoing analyses could be generalized across all of the CCL Theme Areas. That is, are there differences in impact that differentiate adult education students, for instance, from those involved in Health Learning? Table 21 shows the results of this analysis. We found that mean impact did not vary across categories of Theme Area. The only caveat here is the low cell frequencies for Adult Education and Health and Learning. Note that the category Early Childhood Education does not appear in the table, because no documents in this portion of the literature related to that theme area.


We began to address the issue of best practices by examining the context within which technology is applied in terms of its global impact. Three context categories were coded, ranging from standard face-to-face instruction to distance education (DE) where technology is a requirement of implementation. We found (Table 22) that these categories of context did not differ.


We examined the impact of various technologies (see Table 23) and found that, in terms of impact, none was better than another. However, note that the largest category frequency is DNA/Unspecified, suggesting that the research literature, unfortunately, does not always make the kind of technology investigated explicit.


One of the most important issues relating to best practices in e-learning is how technology is applied in instructional settings. The variable that addresses this issue is shown in Table 24 and is referred to as Pedagogical Implementation of Technology. The levels shown in Table 24 range from teacher-centered to student-centered uses of technology. While the F-ratio does not reach significance, it comes close enough to warrant interpreting the post hoc analysis that followed the ANOVA. Post hoc analysis indicated that Instructional/Informative applications of technology differed significantly from Student Applications.


Among the variables of interest to both practitioners and policy makers is whether the introduction of professional training has an impact on the outcomes of technology use. Are teacher training, workshops, online help sites, and the like worth the effort and expense associated with them? The research literature suggests that they are. Documents which contained evidence of the implementation of professional development had a higher impact mean than documents that did not mention it (See Table 25).


Similarly, we considered whether course development has an impact on the successful implementation of e-learning. While Table 26 indicates that the mean for global impact is higher in documents that consider course development, there is no significant difference.


A similar finding resulted (See Table 27) when the impact of Infrastructure and Logistics was considered.


The results of these analyses indicate an overall relationship between the predictor variables, namely the nature of technology use and pedagogical factors, and the general impacts of e-learning. Furthermore, these relationships are generally similar to our conclusions when all sources of evidence were included in the analyses.

However, this should not be taken to mean that these conditions apply universally, in all circumstances and with all learner groups, for at least two reasons. First, there is variability associated with the predictors and outcomes; individual relationships appear only modest in size. Second, and especially important, the quality and quantity of the evidence do not allow us to identify with certainty the factors which impact on e-learning.

Quantitative Summary of the Canadian Primary Research on E-learning

What is a quantitative summary? As a final stage in this research, we undertook to identify the primary e-learning studies from the Canadian context that could be summarized quantitatively. This summary resembles a meta-analysis in its methodological approach, but is not as inclusive as a meta-analysis might be expected to be. The methodological approach to meta-analysis involves synthesizing the experimental and quasi-experimental studies from which an effect size can be extracted. An effect size is the standardized mean difference between a treatment condition (e-learning) and control condition (no e-learning). An effect size, therefore, is interpreted in units of standard deviation, and can be negative or positive. A positive effect size says that the treatment group outperformed the control group. A negative effect size says the reverse. Generally speaking, effect sizes up to ±0.25 are considered small and effect sizes up to ±0.75 are considered moderate. Effect sizes over ±0.75 are large, and when positive suggest that the treatment had a noticeable effect compared to the control condition.
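
In the standard formulation (our notation; the computational details of individual studies may differ), the effect size for a single comparison is

    \[
    g \;=\; \frac{\bar{X}_{\mathrm{treatment}} - \bar{X}_{\mathrm{control}}}{SD_{\mathrm{pooled}}}
    \]

so that, for example, a treatment mean of 78, a control mean of 74, and a pooled standard deviation of 10 yield g = +0.40, a moderate positive effect by the standards just described.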

A distribution of effect sizes can be averaged to produce an overall summary. The same standards of interpretation that apply to individual effect sizes also apply to the average. However, the variability of effect sizes around the mean must also be considered. Widely dispersed (heterogeneous) effect sizes are not representative of a single parameter in the population of effect sizes; in other words, the mean cannot be interpreted as an estimate of one common underlying effect.
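
For reference, the usual fixed-effect formulas (standard meta-analytic practice, not specific to this review) weight each effect size g_i by the inverse of its sampling variance, w_i = 1/v_i, and test homogeneity with Cochran's Q:

    \[
    g_{+} \;=\; \frac{\sum_{i} w_i\, g_i}{\sum_{i} w_i},
    \qquad
    Q \;=\; \sum_{i} w_i\,(g_i - g_{+})^2
    \]

Q is referred to a chi-square distribution with k − 1 degrees of freedom, where k is the number of effect sizes; a significant Q signals heterogeneity.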

When an overall mean effect size is heterogeneous, it is usually appropriate to explore this variability by introducing moderator variables. We did so here by exploring the relationship between effect size and type of research design.

Method of selecting studies. Our goal was to identify the studies from the Canadian primary research literature that could be included in a meta-analysis, in order to describe the measured impact of e-learning based on high-quality studies. Of the 152 studies previously described and analyzed, we found a total of only 7 that were truly experimental (i.e., random assignment) and 10 that were quasi-experimental (i.e., not randomized but possessing a pretest and a posttest). The presence of a pretest makes a quasi-experimental design more interpretable than a non-randomized, posttest-only design. Within these 17 studies we identified 29 independent effect sizes (i.e., studies often contain more than one finding). Two raters, working independently, judged the number of effect sizes that could be derived from each study (89% agreement) and the calculation of the effect size extracted in each case (97% agreement).

Analysis and results. Means based on small samples suffer greatly from the influence of outlying findings, so we decided to treat the collection of effect sizes as a single composite, “the composite impact of e-learning,” rather than analyzing them separately by type of outcome measure. For the 29 effect sizes, Hedges’ g+ (the unbiased mean) was +0.117, a small effect by the standards previously described (see the last line in Table 28). Accordingly, approximately 54% of the e-learning participants performed at or above the mean of the control participants (50th percentile), an advantage of about 4 percentile points. However, the heterogeneity analysis was significant (the Q-value in Table 28), indicating that the effect sizes were widely dispersed. It is clearly not the case that e-learning is always the superior condition for educational impact. Our search for the ideal conditions continues. Elsewhere in this review we have noted the possible importance of pedagogical features as they relate to the integration of technology into learning. We continue to consider this an important direction for future research.
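
The percentile statement follows from the standard normal distribution: assuming roughly normal outcome distributions, the proportion of e-learning participants scoring at or above the control mean is Φ(g+). A quick check (an illustrative sketch, not part of the original analysis):

    from scipy.stats import norm

    g_plus = 0.117
    # Proportion of treatment participants at or above the control mean,
    # assuming normal distributions.
    print(norm.cdf(g_plus))  # ~ 0.547, i.e., roughly the 54th-55th percentile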

We then undertook an examination of the differential effects of true experiments and quasi-experiments. This analysis is also shown in Table 28. The mean effect size of the 20 true experiment effects was +0.109 and the mean effect size of the 9 quasi-experimental effects was +0.245.


Discussion

Introduction

The proliferation of, and interest in, e-learning in Canada is unquestionable. However, it is somewhat uncertain how e-learning will proceed in the future, given the magnitude of the investment required to develop it, sustain it and cause it to grow. We undertook this review to provide answers to a host of questions regarding the research and development of e-learning over the past five years. While history is not always the best guide to future decision-making, it does provide a level of evidence that goes beyond decision-making based on opinion, politics, conjecture and posturing. We believe that the foregoing review will provide a set of lenses through which areas of successful and unsuccessful implementation can be brought into focus.

An uncritical assessment of the evidence from myriad sources provides a fairly consistent and positive picture. Conclusions from Canadian primary research, international literature reviews, policy documents, media reports, and practitioner publications are mostly favourable. Yet a closer examination of the evidence paints a less convincing portrait or “rough sketch.” In Canada there is a lack of evidence in some theme areas, notably early childhood learning, and a lack of experimental and quasi-experimental evidence that would allow unambiguous causal conclusions to be drawn about effectiveness. The quality and scope of the research evidence does not match the time, cost and resources that have been and will be dedicated to the development and implementation of e-learning. One of our most striking conclusions—one that bears close consideration by practitioners and policy makers alike—is the need for programs of development for new initiatives that have high-quality research and evaluation programs or components built-in as a forethought, not an afterthought. And we believe there is a need for large-scale, longitudinal and intensive implementations of e-learning which go beyond investigations of short-term impact. Such initiatives require the attention of specially trained educational technologists, working alongside content experts and other educators.

In the health sciences, new methods are implemented following a progression from small scale, controlled laboratory experimentation, to controlled field trials, to wide scale field investigations and follow-up. In Canada’s e-learning endeavours, we need to consider whether such a model of inquiry may bear fruit and whether the expenses of pan-Canadian efforts at scalable inquiry are truly worth the costs.

What We Know: Consensual Validation

We were surprised at the remarkable consistency that emerged across the sources of evidence and, to a lesser extent, across the CCL theme areas. Opinions among policy makers, practitioners, the general public, plus evidence from Canadian primary research and a variety of literature reviews, all point to the positive impacts of e-learning on achievement, motivation, communication, learning flexibility and meeting social demands. But the consensual evidence alone does not reveal what accounts for these positive impressions across the multiple constituencies represented in this study.

What We Know: The Evidence

Major Messages: Sample of Document Conclusions

In the Results, we included a section on the analysis of the most representative messages found in each of the five document sources: general public opinion; practitioner literature; policy documents; scholarly reviews; and primary Canadian research. What follows is a summary of the major messages that were derived from each of these sources.

General Public Opinion:

Practitioners:

Policy documents:

Reviews:

Primary studies:

Findings from Analyses of Data

We examined the data from a number of perspectives, including the quality of the evidence. Because of the categories of sources we chose to examine, from public opinion to primary research, the quality of evidence varies accordingly. In the primary research literature we examined the kinds of research designs that were used. We found that over half of the studies conducted in Canada are qualitative in nature, with the rest split between surveys and quantitative studies (correlational and experimental). When we looked at the nature of the research designs, again, 51% are qualitative case studies and 15.8% are experimental or quasi-experimental studies. It seems that studies that can help us understand “what works” in e-learning settings are underrepresented in the Canadian research literature.

Generally speaking, the data can be classified as either outcomes (we refer to them as “impacts”) or predictors. Outcomes/impacts are the perceived or measured benefits of e-learning, whereas predictors are the conditions or features of e-learning that can potentially affect the outcomes/impacts. The impacts were coded on a positive-to-negative scale and included: 1) achievement; 2) motivation/satisfaction; 3) interactivity/communication; 4) meeting social demands; 5) retention/attrition; 6) learning flexibility; and 7) cost. Based on an analysis of the correlations among these impacts, we subsequently collapsed them (all but cost) into a single impact scale, ranging from –1 to +1. We found, generally, that the perception of impact or actual measured impact varies across the types of documents: it appears to be higher in general opinion documents, practitioner documents and policy-making reports than in scholarly reviews and primary research. On the one hand, this may represent an expression of hope for positive impact; on the other, it possibly reflects reality. Where there were sufficient documents to examine and code, impact was high across each of the CCL Theme Areas. Health and Learning was the highest, with a mean of 0.80, and Elementary/Secondary was the lowest, with a mean of 0.77; however, there was no significant difference between these means.

The perceived impact of e-learning and technology use was higher in distance education, where its presence is required (Mean = 0.80) and lower in face-to-face instructional settings (Mean = 0.60) where its presence is not required. Network-based technologies (e.g., Internet, Web-based, CMC) produced a higher impact score (Mean = 0.72) than straight technology integration in educational settings (Mean = 0.66), although this difference was considered negligible. Interestingly, among the Pedagogical Uses of Technology, student applications (i.e., students using technology) and communication applications (both Mean = 0.78) had a higher impact score than instructional or informative uses (Mean = 0.63). This result suggests that the student manipulation of technology in achieving the goals of education is preferable to teacher manipulation of technology.

In terms of predictor variables (professional training, course design, infrastructure/logistics, type of learners—general population, special needs, gifted, gender issues and ethnicity/race/religion/aboriginal—location, school setting, context of technology use, type of tool used and pedagogical function of technology) we found the following:

Our quantitative summary of the primary Canadian studies for which effect sizes could be calculated tends to support the general impression of the positive benefits of e-learning. While the mean effect size is not large (g+ = 0.117), it is nonetheless in a positive direction. However, the wide variability suggests that the effectiveness of e-learning is not always guaranteed. Given that we as educators are still on the threshold of understanding how to design courses to maximize the potentials of technology, this finding represents a hopeful sign. More attention must be given to exploring the interactive effects of learning about technology and learning with technology, while realizing all the while that it is not the panacea that some suggest it to be.

The CCL Objectives

The objectives of this review, as stated by the Canadian Council on Learning, are as follows: 1) Identify and verify through research the most effective practices and procedures to promote learning; 2) Identify major gaps in our knowledge and understanding of e-learning; and 3) Identify the most promising lines of inquiry for addressing those gaps.

The approach that we took in this review—the use of frequency counts, weak measures of outcomes and relatively few coded predictor variables—does not readily present us with evidence of best practices and “what works” in e-learning. This is the job that a meta-analysis could accomplish by estimating the average strength of various instructional treatments and the variability associated with them. Even had we originally intended to do this for the primary empirical literature of Canada (a task that could have been managed in the time frame of this review), the relatively small corpus of quantitative research conducted in Canada would have rendered it nearly impossible, and would probably have resulted in our having to say “we can’t tell from the data.” However, this approach to systematic review has revealed where many research efforts have been placed and where the gaps in our knowledge lie. Overall, we know that research in e-learning has not been a Canadian priority; the culture of educational technology research, as distinct from development, has not taken on great import as it has in the U.S., except as it appears to have happened in the business sector. Historically, there have been few successful centres, institutes or networks of research in educational technology that would generate the kinds of Canadian research and evaluation evidence of learning impacts upon which strong conclusions can be based. Likewise, there is no mandate for evidence-based policy making (as there is in the U.K., for instance) that would make research on the development of e-learning a major pan-Canadian research priority.

In addition, there appears to have been a disproportionate emphasis on qualitative research in the Canadian e-learning research culture. While we laud any and all quality research efforts, the relative de-emphasis on quantitative research means that its areas of strength—those types of questions that can be answered through experimental and quasi-experimental studies—are not as well represented in the literature as would be desirable. Does this mean that our conclusions for innovation must be based primarily on investigation in foreign contexts? Regrettably, the answer is probably yes.

We noted that there are gaps in areas of research related to early childhood education and adult education. It is fairly easy to understand the former, but it is harder to decipher the latter. It may be that the larger bodies of knowledge on secondary and postsecondary education result from the direct involvement of universities in these research efforts. Naturally, we would encourage a greater interest from school boards and districts in conducting research, and we would like to see emphasis placed on rigorous implementation and quality design and analysis. We see academic/practitioner/policy maker partnerships as a very desirable way to accommodate the multiple perspectives that are needed to answer these important and timely questions. Furthermore, the time and contribution of these various parties needs to be recognized and supported while there needs to be a separation of vested interests, particularly in the analysis and interpretation stages, to increase objectivity and to ensure credibility.

Finally, we believe that more emphasis must be placed on implementing longitudinal research, whether qualitative or quantitative (preferably a mixture of the two), and that all development efforts be accompanied by strong evaluation components that focus on learning impact. It is a shame to attempt innovation and not be able to tell why it works or doesn’t work. In this sense, the finest laboratories for e-learning research are the institutions in which it is being applied.

Limitations and Future Directions

The biggest unanswered question for policy makers and practitioners concerns whether e-learning is worth the cost. The generally positive results on achievement, motivation, and other outcomes suggest there is positive evidence as well as positive opinion among the stakeholders. But the fascinating and largely unanswered question concerns the emphasis placed on deployment, the attendant costs, and what one might take away, or not add, given the expense of an e-learning curriculum. While in distance learning environments this may be less of an issue, since technology is a requirement of operation, in face-to-face and hybrid/blended learning situations many policy makers struggle with decisions regarding the choice between innovative pedagogy and innovative technology. We don’t have firm answers for them yet.

While we did not highlight them in this review, we are aware that there are promising areas of new development focusing on the specific applications of technology listed below.

We share with others our personal enthusiasm for the potential of these inquiries to bear fruit. But here too, if the impact of e-learning on students and educators is paramount, then we need to consider, as an integral part of development, whether and to what extent there are meaningful pedagogical advantages to the new applications. Do, in fact, students using the new technologies learn more, or better? Do, in fact, educators improve their teaching practices? Are the tools readily usable? How much professional development and training is needed? To borrow an idea from the film “Field of Dreams”: if you build it, will they come? In other words, the promise of e-learning with new developments must be matched by a consideration of when, how, where, and by whom these new developments will be used. We suggest that needs assessments, front-end analyses, and other forms of usability assessment accompany tests of effectiveness, efficiency and cost-benefit.

We emphasized Canadian primary research to determine the nature and extent of evidence in our country. Our ability to compare Canadian evidence with research from other countries was limited and indirect. We included only literature reviews of non-Canadian primary research. We agree with the importance of examining what is known within the nation on this important topic but we must also find ways to blend our knowledge with what is learned elsewhere. Some topics have a uniquely Canadian focus, such as language, culture, and geography, and may need to be studied separately. But other topics, such as the pedagogical qualities and types of technology use which impact on the quality and quantity of learning and motivation, not only deserve attention within Canada but also in combination with what we can learn from the evidence collected elsewhere. Given the magnitude of evidence that exists beyond our borders, and particularly from the U.S., providing a proper review in the time frame allotted was impossible.

This review of e-learning attempted to go beyond traditional reviews of the empirical literature by including evidence and opinions from a variety of sources. Because of the scope of this undertaking, its novelty, and the time constraints under which we operated, we have been able to provide only a rough portrait of the evidence and opinions. In doing so, we attempted to strike a balance between the extraordinary breadth of our review and the depth we wanted to achieve in conducting a detailed analysis of the documents. And even while we ensured coverage of five distinct sources of evidence, we did not do so comprehensively for all of them. A more elaborate review and Argument Catalogue is called for, to explore the documents in greater depth, and to incorporate evidence from international efforts at primary research, in particular.

While we incorporated a large number of variables, we are certain that a finer analysis of the literature would yield far more. For example, our review did not incorporate evidence on subject matter, duration of treatment or newer developments in technology software and courseware. In our recently published meta-analysis of the empirical-comparative literature of distance education (Bernard et al., 2004), we coded 54 study features related to the methodological quality, pedagogy, media and technology, context, etc. of the 236 studies we included. That meta-analysis required four years to complete at an estimated cost many times greater than that of the current review. We envision that a comparably thorough study of the e-learning literature would require nearly that amount of time and resources.

We did not examine the evidence from a theoretical perspective, in part because of the time limits, but primarily because there is little in the way of theory-testing research on e-learning that can be synthesized. The importance of doing this may depend on whether e-learning is a transparent tool for learning which merely delivers pedagogy and content (e.g., Clark, 1983), or a transformative one, which impacts how pedagogy and content are blended and delivered (e.g., Dede, 1996, 2000; Ullner, 1994). Investigations of computer conferencing (e.g., Bures, Abrami, & Amundsen, 2000), text-weaving, and knowledge forums may be worth a detailed synthesis of evidence and new primary evidence. Similarly, research on interactive multimedia (Mayer, 2003) and embedded multimedia (Chambers, Cheung, Gifford, Madden, & Slavin, in press) may explore what is unique about e-learning. But we also need good questions, either inductive or deductive, through which key propositions can be explored.

There were also methodological challenges and shortcomings to this review and our use of an Argument Catalogue to synthesize views on e-learning. We did not weight outcomes (i.e., effect sizes) by their size and complexity. In traditional meta-analyses, for example, the findings from studies are weighted by sample size so that larger studies have more weight. We did not weight the primary evidence by sample size but instead treated each study equally. Similarly, we did not account for the size and scope of literature reviews and policy documents in aggregating evidence from these sources. A small review of several pages was given the same weight as a lengthy one. In the future, we might consider giving more weight to papers as a function of their scope.

Finally, our analyses of evidence, including the primary evidence, are based merely on frequency analyses or vote counts of impacts, without regard to the methodological quality of the evidence. We combined all sources of evidence, regardless of the type and quality of design, in part because applying more stringent criteria would have left too few documents to analyse. We also did not compute effect sizes and so cannot estimate the magnitude of the impact of technology integration on achievement, motivation, dropout, and other outcomes. Vote counts tell us about the consistency of effects, not about their size. But size matters for policy decisions, as do considerations of cost. Future research should explore both.

In most systematic reviews, especially quantitative syntheses of primary evidence, considerable effort is expended in judging the quality of the evidence using a plethora of methodological criteria often focusing on, but not necessarily limited to, a study’s internal validity or the certainty with which causal inferences are likely. Not only did we not apply these quality judgments to the primary Canadian studies as exclusion criteria (given the limited number of studies which would meet rigorous standards), but we applied no quality criteria whatsoever in judging the other sources of evidence. We did, however, compare the primary evidence, literature reviews, and the opinions we extracted from other sources to identify similarities and differences. Nevertheless, we are sensitive to the possibility that a review based on an Argument Catalogue gives voice to popular perceptions and the zeitgeist of current views, whether they are formed carefully or carelessly. We believe the advantages of a comprehensive review like ours are worth the risks and the costs of broad inclusivity. But the final judgment may be the readers’.

What is Needed to Effect and Support Effective E-learning

In education, there is a mistaken view, repeated over the generations: 1) that technology represents a “magical solution” to a range of problems affecting schools and learners; and 2) that money for technology alone, thrown in large enough quantities at the problems of education, will effect the kinds of changes that are required to produce a well-informed, literate and numerate citizenry. It is probably true that the wide range of electronic technologies (including those that provide access to the Internet) that are now, and will remain, available stand a better chance of effecting educational change than the technologies of film, television, learning machines, intelligent tutoring systems, etc. However, it has never been the case that money alone solves problems unless it is invested in equal amounts in human and physical resources. We found, among the many coded variables that might be classified as support for e-learning, that reference to professional development ranked the lowest across all document types and CCL theme groups (11.16% of all documents examined). By contrast, nearly half of the documents we examined (47.8%) referred to infrastructure and logistical support for e-learning. It is arguable that the education of Canadians would be better served by more emphasis on preparing and training practitioners to use technology effectively than by rushing to adopt the “technology du jour.”

Richard E. Clark (1983, 2000) has long argued that technology alone is of little importance in effecting achievement gains in students. Course design and instructional strategies are far more important aspects of effective teaching than the technology that is used. The Bernard et al. (2004) meta-analysis found evidence that pedagogical study features account for more variation in achievement outcomes than media-related study features, at least in the case of distance education. In addition, there are other aspects of instructional design that contribute to the successful implementation of any technology-based course design. There are more than 100 ISD models, but to illustrate the point, the steps in the model from Dick, Carey and Carey (2001) are shown below. The stages in their development model are:

More often than not, these or similar steps are not considered in the design of instruction or in the design of the larger learning systems into which technology is being integrated. The result is often wasted money, wasted effort and a system that does not achieve what it was intended to achieve. Unfortunately, the literature does not appear to document the successes or failures of the use or misuse of instructional design and development models, except in the context of corporate training. Since the literature of corporate training was beyond the scope of this study, we are unable to establish this point empirically.

Evaluation of Our Methodology

In a previous section we detailed the limitations of this review, which are mostly attributable to the ninety-day time frame in which it was accomplished. But what about the effectiveness of the approach to systematic review—the Argument Catalogue—that we have pioneered and applied here? Several comments about the somewhat unconventional searches that were implemented are appropriate. Systematically reviewing the academic literature is relatively straightforward, given the large range of retrieval tools that are available internationally. However, when it comes to media, practitioner, and policy documents, the retrieval tools are not nearly as sophisticated, organized or complete. Controlled vocabulary may not be used, or may be quite broad due to the multidisciplinary nature of the index; abstracts may not be included in the index, thus requiring retrieval of the full document to determine its relevancy; policy and trade documents may be scattered across different indices; access to foreign material may be hampered; dates of coverage of the index may be limited; and so on. This makes searching more challenging and time-consuming, and could perhaps limit the reliability of the data collected. We do not believe this to have been a particular problem here, given our focus on retrieving current Canadian material, but sufficient time is required to fully develop and enact proper searches in the various international literatures, which have somewhat different characteristics, search terms and the like.

This being said, we fully believe that giving voice to other, often overlooked, sources of evidence, whether they are mere opinion, based on practical experience, or derived from empirical research, is important in developing a complete portrait of a field which touches Canadians at so many different levels, and which requires such a substantial investment in human and material resources.

Elsewhere in this report we have described the primary advantages of an Argument Catalogue as: 1) the ability to “frame” the literature for the purposes of performing other kinds of systematic reviews (e.g., meta-analysis); 2) showing the overlap among the various perspectives; and 3) revealing the gaps or discrepancies between different perspectives on a given topic or question. Given the limitations previously noted, but based on what we have achieved, we believe the notion of an Argument Catalogue to be well worth pursuing, and it is our intention to continue to develop and perfect the methodologies associated with it.

Acknowledgement

The Canadian Council on Learning funded this review under a contract to Abrami, Bernard, Wade and Schmid. The opinions expressed herein are solely those of the authors. Inquiries about the review should be directed to Dr. Philip C. Abrami, Centre for the Study of Learning and Performance, Concordia University, 1455 DeMaisonneuve Blvd. W., Montreal, Quebec, H3G 1M8. Phone: 514-848-2424 x2102. E-mail: abrami@education.concordia.ca. Web site: http://doe.concordia.ca/cslp/

References

Abrami, P. C. (2001). Understanding and promoting complex learning using technology. Educational Research and Evaluation, 7 (2-3), 113–136.

Amiot, M. A. (2001, August 25). L’autre revolution d’internet: L’apprentissage en ligne. La Presse, p. C1.

Artiss, P., Fitzpatrick, L., Hammett, R. F., Kong, X., & Noftle, E. A. (2001). Friendly neighborhood computers: Action research in adult literacy [Report]. (ERIC Document Reproduction Service No. ED455399).

Bagui, S. (1998). Reasons for increased learning using multimedia. Journal of Educational Multimedia and Hypermedia, 7(1), 3–18.

Bell, L., McCoy, V., & Peters, T. (2002). E-books go to college. Library Journal, 127(8), 44–46.

Berg, G. A. (2000). Human-computer interaction (HCI) in educational environments: Implications of understanding computers as media. Journal of Educational Multimedia and Hypermedia, 9(4), 347–368.

Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., et al. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.

The birth of the Celtic Tiger. (2003, June 9). National Post, p. BE1.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Commission on Developments in the Science of Learning, National Research Council. Washington, DC: National Academy Press.

Breuleux, A., Laferrière, T., & Lamon, M. (2002, May). Capacity building within and across countries into the effective uses of ICTs. Paper presented at the 2002 Pan-Canadian Education Research Agenda Symposium, Montreal, QC. Retrieved October 3, 2005, from http://www.cesc.ca/pcera2002E.html

Brown, J. S. (2002, February). Growing up digital: How the web changes work, education, and the ways people learn. USDLA Journal, 16(2). Retrieved October 4, 2004, from http://www.usdla.org/html/journal/FEB02_Issue/article01.html

Bryson, M., Petrina, S., & Braundy, M. (2003). Conditions for success? Gender in technology-intensive courses in British Columbia secondary schools. Canadian Journal of Science, Mathematics and Technology Education, 3(2), 185–193.

Bures, E., Abrami, P. C., & Amundsen, C. (2000). Student motivation to learn via computer-conferencing. Research in Higher Education, 41(5), 593–621.

Cantoni, L., & Rega, I. (2004). Looking for fixed stars in the elearning community: A research on referenced lit. in SITE proceeding book from 1994–2001. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (Vol. 2004, pp. 4697–4704). Retrieved September 12, 2005 from AACE Digital Library database.

Caplan, D. (2005). The development of online courses. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning. Creative Commons, Athabasca University, 175–194. Retrieved September 12, 2005, from http://cde.athabascau.ca/online_book

Cavanaugh, C. S. (2001). The effectiveness of interactive distance education technologies in K-12 learning: A meta-analysis. International Journal of Educational Telecommunications, 7(1), 73–88.

Chambers, E. A. (2003). Efficacy of educational technology in elementary and secondary classrooms: A meta-analysis of the research literature from 1992–2002. Ph.D. dissertation, Southern Illinois University at Carbondale. Retrieved November 8, 2005, from ProQuest Digital Dissertations database. (Publication No. AAT 3065343).

Chambers, B., Cheung, A., Gifford, R., Madden, N. A., & Slavin, R. E. (in press). Achievement effects of embedded media in a Success for All reading program. Journal of Educational Psychology.

Cheng, L., & Myles, J. (2003). Managing the change from on-site to online: Transforming ESL courses for teachers. Open Learning, 18(1), 29.

Christmann, E. P., & Badgett, J. L. (2003). A meta-analytic comparison of the effects of computer-assisted instruction on elementary students’ academic achievement. Information Technology in Childhood Education Annual, 15, 91–104.

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445–459.

Clark, R.E. (2000). Evaluating distance education: Strategies, and cautions. Quarterly Review of Distance Education, 1(1), 3–16.

Clark, R. E., & Sugrue, B. M. (1995). Research on instructional media, 1978–1988. In G. Anglin (Ed.), Instructional technology (pp. 327–343). Englewood, CO: Libraries Unlimited.

Classroom computers get failing grade: Fixing high-tech tools keeps teachers from their primary tasks. (2000, December 1). National Post, p. A1.

CMEC (2001). Education ministers call upon federal government to invest in connectivity in budget. Retrieved June 22, 2004, from http://www.cmec.ca/releases/20011115.en.asp

Computers and the young: They can impede development, one group of experts in the US says. Meanwhile, Quebec is forging ahead with its new-technology school curriculum. (2000, September 18). The Gazette, p. A3.

Contact North enjoys high enrolment in Tri-Towns. (2002, December 9). Sudbury Star, p. A2.

Craig, D. V. (2001). View from an electronic learning environment: Perceptions and patterns among students in an online graduate education course. Journal of Educational Technology Systems, 30(2), 197–219.

Crawford, C. M., Gannon-Cook, R., & Rudnicki, A. (2002). Perceived and actual interactive activities in elearning environments. Proceedings of World Conference on E-Learning in Corp., Govt., Health, & Higher Ed. (Vol. 2002, pp. 917–920). Retrieved September 12, 2005, from AACE Digital Library database.

Crichton, S., & Kinsel, E. (2000). Communities in transition: Technology as a tool for change. Education Canada, 40(3), 44.

Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, 38(4), 813–834.

Cuneo, C. J., Campbell, B., & Harnish, D. (2002, May). The integration and effectiveness of ICTs in Canadian postsecondary education. Paper presented at the 2002 Pan-Canadian Education Research Agenda Symposium, Montreal, QC.

Cuneo, C. J., & Harnish, D. (2002). The lost generation in e-learning: Deep and surface approaches to online learning [Report]. Hamilton, ON: McMaster University. (ERIC Document Reproduction Service No. ED466646).

Davis, N. E., & Carlsen, R. (2004). A comprehensive synthesis of research into ICT in education. In T. van Weert, (Ed.), Proceedings of World Summit on the Information Society Forum. Amsterdam: Kluwer Academic Publishers. Retrieved October 12, 2005 from http://www.iastate.edu/~ilet/reading_groups/Pdf_files/wsis.pdf

Dede, C. (1996). The evolution of distance education: Emerging technologies and distributed learning. Educational Technology, 35(5), 46–52.

Dede, C. (2000). Implications of emerging information technologies for States’ educational policies. 2000 State Educational Technology Conference papers, State Leadership Center, Council of Chief State School Officers.

Dewar, T., & Whittington, D. (2000). Online learners and their learning strategies. Journal of Educational Computing Research,23(4), 385–403.

Dibbon, D. (2000, June 21). Virtual classrooms: As e-commerce has done with business, e- learning will revolutionize the way we teach children. Telegram, p. 6.

Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th ed.). New York: Addison-Wesley.

Downes, S. (2000). Learning objects. Retrieved June 13, 2001, from www.atl.ualberta.ca/downes/namwb/column000523_1.htm

Ewing-Taylor, J. (1999). Student attitude toward web-based courses. Retrieved August 23, 2005, from http://unr.edu/homepage/jacque/research/student_attitudes.html

Feenberg, A. (n.d.). The “TEXTWEAVER” – Active reading hypertext for computer conferencing. Retrieved September 22, 2005, from http://www-rohan.sdsu.edu/faculty/feenberg/textweaver/hyper.html

Fichten, C. S., Asuncion, J. V., Barile, M., Fossey, M. E., & Robillard, C. (2001). Computer technologies for postsecondary students with disabilities I: Comparison of student and service provider perspectives. Journal of Postsecondary Education and Disability, 15(1), 28–58.

Fielden, J. (2001). Markets for ‘borderless education.’ Minerva, 39(1), 49–62.

Foster, S. (2002, June 25). Distance can be a deterrent. Leader Post, p. B3.

Friesen, N. (2004). Three objections to learning objects. In R. McGreal (Ed.), Online education using learning objects. London: Routledge/Falmer.

Gabriel, M. A., & Macdonald, C. J. (2002). Working together: The context of teams in an online MBA program. Canadian Journal of Learning and Technology, 28(2), 49–65. Retrieved September 24, 2005, from http://www.cjlt.ca/content/vol28.2/gabriel_mcdonald.html

Haddow, S. H. (2003). Distance no barrier for learners. Northern Ontario Business, 23(6), 15.

Hadwin, A. F., & Winne, P. H. (2001). CoNoteS2: A software tool for promoting self-regulation. Educational Research and Evaluation, 7(2–3), 313–334.

Hamilton, T. (2002, December 9). E-learning shrinks space, expands ideas; E-learning may help minds meet; Distance education works out the kinks; University project had a few glitches but it shows new ways minds may meet. Toronto Star, p. D.01.

Hannafin, M. J., & Land, S. M. (1997). The foundations and assumptions of technology-enhanced student-centered learning environments. Instructional Science, 25, 167–202.

Harasim, L., Hiltz, S. R., Teles, L., & Turoff, M. (1995). Learning networks: A field guide to teaching and learning on-line. Cambridge, MA: MIT Press.

Harley, D. (2002). Investing in educational technologies: The challenge of reconciling institutional strategies, faculty goals, and student expectations. Center for Studies in Higher Education, University of California. (CSHE7’02).

Healy, J. M. (1998). Failure to connect: How computers affect children’s minds – for better or worse. New York: Simon & Schuster.

Hillman, R. (2000). Reach homebound students through technology. Multimedia Schools, 7(2), 78–80.

Hirumi, A. (2002). The design and sequencing of e-learning interactions: A grounded approach. International Journal of E-Learning, 1(1), 19–27.

Hofmann, J., & Dunkling, G. (2002). Best practices in blended elearning. Proceedings of World Conference on E-Learning in Corp., Govt., Health, & Higher Ed. (Vol. 2002, pp. 2491–2493).

IEEE. (2002). Draft standard for learning object metadata. Retrieved March 24, 2003, from http://ltsc.ieee.org/doc/wg12/LOM_WD6_4.pdf

Industry Canada, Advisory Committee for Online Learning. (2001). The e-learning e-volution in colleges and universities: A Pan-Canadian challenge. (Cat. No. C2-549/2001E). Ottawa, ON: Information Distribution Centre, Communication Branch, Industry Canada. Retrieved August 15, 2005, from http://www.cmec.ca/postsec/evolution.en.pdf

Internet for all. (2001, July 2). Toronto Star, p. A10.

Irvine, V., & Montgomerie, T. C. (2001). A survey of current computer skill standards and implications for teacher education [Report]. (ERIC Document Reproduction Service No. ED466176).

Jacobsen, M., Clifford, P., & Friesen, S. (2002). New ways of preparing teachers for technology integration. Norfolk, VA: Association for the Advancement of Computing in Education (AACE). (ERIC Document Reproduction Service No. ED477032).

Johnston, S., & Mitchell, M. (2000). Teaching the FHS way. Multimedia Schools, 7(4), 52.

Jonassen, D. H., Howland, J., Moore, J., & Marra, R. M. (2003). Learning to solve problems with technology: A constructivist perspective (2nd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.

Kanuka, H., Collett, D., & Caswell, C. (2002). University instructor perceptions of the use of asynchronous text-based discussion in distance courses. American Journal of Distance Education, 16(3), 151–167.

Karsenti, T. (2001). From blackboard to mouse pad: Training teachers for the new millennium. Education Canada, 41(2), 32–35.

Katz, R. N. (2002). The ICT infrastructure: A driver of change. EDUCAUSE Review, 37(4), 50–61.

Kelly, K., Lord, M., & Marcus, D. L. (2000, September 25). False promise. U.S. News & World Report, 129, p. 48.

Kinuthia, W. (2004, October). Impediments to faculty engaging in web-based instruction: Clarification of governing policies. Paper presented at the Association for Educational Communications and Technology Conference, Chicago, IL. (ERIC Document Reproduction Service No. ED485016).

Knowles, J. A. (2004). Pedagogical and policy challenges in implementing e-learning in social work education. Ph.D. dissertation, University of Alberta (Canada). Retrieved November 8, 2005, from ProQuest Digital Dissertations database. (Publication No. ATT NQ95955).

Koper, R. (2005). An introduction to learning design. In R. Koper & C. Tattersall (Eds.), Learning design – A handbook on modelling and delivering networked education and training. Berlin, Heidelberg, New York: Springer.

Kuh, G. D., & Vesper, N. (2001). Do computers enhance or detract from student learning? Research in Higher Education, 42(1), 87–102.

Kukulska-Hulme, A., & Traxler, J. (Eds.). (2005). Mobile learning: A handbook for educators and trainers. Open and flexible learning series. London, UK: Routledge.

Kulik, J. A. (2003). Effects of using instructional technology in colleges and universities: What controlled evaluation studies say. (Final Report). Arlington, VA: SRI International. (P10446.003). Retrieved October 12, 2005, from http://sri.com/policy/csted/reports/sandt/it/Kulik_IT_in_colleges_and_universities.pdf

Lapadat, J. C. (2000). Teaching online: Breaking new ground in collaborative thinking [Report]. (ERIC Document Reproduction Service No. ED443420).

Lapadat, J. C. (2004). Online teaching: Creating text-based environments for collaborative thinking. Alberta Journal of Educational Research, 50(3), 236–251.

Laurillard, D. (2002). Rethinking university teaching: A framework for the effective use of educational technology (2nd ed.). London: Routledge.

Levin, D., & Arafeh, S. (2002). The digital disconnect: The widening gap between Internet-savvy students and their schools [Report]. Washington, DC: Pew Internet & American Life Project. Retrieved September 6, 2005, from http://www.pewinternet.org

Lewis, B., & Jenson, J. (2001). Beyond the workshop: Education policy in situated practice. Education Canada, 41(3), 28.

Li, Q. (2002). Gender and computer-mediated communication: An exploration of elementary students’ mathematics and science learning. Journal of Computers in Mathematics and Science Teaching, 21(4), 341–359.

Looker, D., & Thiessen, V. (2002, May). The digital divide in Canadian schools: Factors affecting student access to and use of information technology. Paper presented at the 2002 Pan-Canadian Education Research Agenda Symposium, Montreal, QC.

Lou, Y., Abrami, P. C., & d’Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449–521.

Lowerison, G., Sclater, J., Schmid, R., & Abrami, P. C. (2006). Student perceived effectiveness of computer technology use in postsecondary classrooms. Computers & Education, 47(4), 465–489.

Mayer, R. E. (2003). The promise of multimedia learning: Using the same instructional design methods across different media. Learning and Instruction, 13, 125–139.

McCombs, B. L. (2000). Assessing the role of educational technology in the teaching and learning process: A learner-centered perspective. The Secretary’s Conference on Educational Technology 2000. Retrieved June 28, 2001, from http://www.ed.gov/Technology/techconf/2000/mccombs_paper.html

McCombs, B. L. (2001). What do we know about learners and learning? The learner-centered framework: Bringing the educational system into balance. Educational Horizons, 79(4), 182–193.

McGreal, R., & Roberts, T. (2001). A primer on metadata for learning objects: Fostering an interoperable environment. E-Learning, 2(10) [Online]. Retrieved June 11, 2002, from http://elearningmag.com/elearning/article/articleDetail.jsp?id=2031

McGreal, R., Anderson, T., Babin, G., Downes, S., Friesen, N., Harrigan, K., et al. (2004). EduSource: Canada’s learning object repository network. Retrieved June 2, 2005, from http://www.itdl.org/Journal/Mar_04/article01.htm

McKenzie, J. (2002). Tech smart: Making discerning technology choices. Multimedia Schools, 9(2), 34.

Meredith, S., & Newton, B. (2004). Models of e-learning: Technology promise vs learner needs literature review. The International Journal of Management Education, 4(1), 43–56.

Merisotis, J. P. (2001). Quality and equality in Internet-based higher education: Benchmarks for success. Washington, DC: Institute for Higher Education Policy. (ERIC Document Reproduction Service No. ED469341).

Morgan, J., White, N., Portal, A., Vanyan, M., & Lasenby, J. (2002, May). The use of computer technology for literacy intervention: Factors contributing to the use of computer-delivered skills-based literacy software. Paper presented at the 2002 Pan-Canadian Education Research Agenda Symposium, Montreal, QC.

Mühlhäuser, M. (2004). Elearning after four decades: What about sustainability? Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (Vol. 2004, pp. 3694–3700). Retrieved September 12, 2005, from AACE Digital Library database.

Murphy, E. (2004). Identifying and measuring ill-structured problem formulation and resolution in online asynchronous discussions. Canadian Journal of Learning and Technology, 30(1). Retrieved September 24, 2005, from http://www.cjlt.ca/content/vol30.1/cjlt30-1_art1.html

Murphy, E., & Coleman, E. (2004). Graduate students’ experiences of challenges in online asynchronous discussions. Canadian Journal of Learning and Technology, 30(2). Retrieved September 26, 2005, from http://www.cjlt.ca/content/vol30.2/cjlt30-2_art-2.html

Nelson, F. A. (2001). Technology for what? Education Canada, 41(3), 38.

New Media Consortium. (2004). The horizon report – 2004 edition. Retrieved August 12, 2005, from www.nmc.org/pdf/2004 Horizon Report.pdf

Noble, D., Shneiderman, B., Herman, R., Agre, P., & Denning, P. J. (1998). Technology in education: The fight for the future. Educom Review, 33(3), 22–20, 32–34.

Oakley, W., & Stevens, K. (2000). TeleLearning: A lifelong opportunity for Canadian students. Education Canada, 40(2), 32–33, 42.

Oliver, M., & Conole, G. (2003). Evidence-based practice and e-learning in higher education: Can we and should we? Research Papers in Education, 18(4), 385–397.

Oliver, R., & Herrington, J. (2003). Exploring technology-mediated learning from a pedagogical perspective. Interactive Learning Environments, 11(2), 111–126.

Online courses grow in popularity. (2003, October 18). Leader Post, p. H6.

Online learning opens up options: Computers bring school to busy lives. (2003, November 23). Calgary Herald, p. A1.

Padilla, C., & Zalles, D. (2001). Capacity-building partnerships: Improving the evaluation of education technology. A final report of the Partnerships Subtask of the Evaluation of Educational Technology Policy and Practice for the 21st Century. (Report submitted to Planning and Evaluation Services, U.S. Dept. of Education). Arlington, VA: SRI International.

Paquette, G. (2003). Instructional engineering in networked environments. Toronto: Wiley Publishers.

Paquette, G. (2004). Instructional engineering for learning objects repositories networks. Proceedings of International Conference on Computer Aided Learning in Engineering Education (CALIE 04), Grenoble, France. Retrieved June 22, 2005, from http://www-clips.imag.fr/calie04/actes/Paquette.pdf

Paquette, G., & Rosca, I. (2002). Organic aggregation of knowledge objects in educational systems. Canadian Journal of Learning and Technology [Electronic version], 28(3). Available from http://www.cjlt.ca/content/vol28.3/paquette_rosca.html

Paquette, G., Lundgren-Cayrol, K., Miara, A., & Guérette, L. (2003). The Explor@-2 learning object manager. In R. McGreal (Ed.), Online education using learning objects. London, UK: Routledge.

Pear, J. J., & Crone-Todd, D. E. (2002). A social constructivist approach to computer-mediated instruction. Computers & Education, 38(1–3), 221–231.

Pittard, V. (2004). Evidence for e-learning policy. Technology, Pedagogy and Education, 13(2), 181–194.

Plante, J., & Beattie, D. (2004). Connectivity and ICT integration in Canadian elementary and secondary schools: First results from the Information and Communication Technologies in Schools Survey, 2003–2004. (81-595-MIE2004017). Ottawa, ON: Statistics Canada.

Rodrigue, S. (2001, November 17). À vos marques, prêts, en ligne! [On your marks, get set, online!] La Presse, p. 10.

Rosenberg, H., Grad, H. A., & Matear, D. W. (2003). The effectiveness of computer-aided, self-instructional programs in dental education: A systematic review of the literature. Journal of Dental Education, 67(4), 524–532.

Rossiter, J. (2002). An e-learning vision: Towards a Pan-Canadian strategy and action plan. (CANARIE Discussion paper). Ottawa, ON: CANARIE Inc. Retrieved August 15, 2005, from http://www.canarie.ca/funding/elearning/elearningvision.pdf

Russell, T. L. (1999). The no significant difference phenomenon. Raleigh, NC: North Carolina State University Press.

Scardamalia, M., & Bereiter, C. (1996). Computer support for knowledge-building communities. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm. Mahwah, NJ: Erlbaum.

Schnackenberg, H. L., Luik, K., Nisan, Y. C., & Servant, C. (2001). A case study of needs assessment in teacher in-service development. Educational Research and Evaluation: An International Journal on Theory and Practice, 7(2–3), 137–160.

Sclater, J., Sicoly, F., & Grenier, A. (2005). ETSB-CSLP laptop research partnership: SchoolNet report: Preliminary study. Montreal, QC: Concordia University, Centre for the Study of Learning and Performance. Retrieved October 12, 2005, from http://doe.concordia.ca/cslp/Downloads/PDF/ETSB_final_report_0628.pdf

Shapiro, B. (2002). Higher education in the new century: Some history, some challenges. Education Canada, 42(1), 12.

Shuell, T. J., & Farber, S. L. (2001). Student perceptions of technology use in college courses. Journal of Educational Computing Research, 24, 119–138.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1), 3–10.

Sinnema, J. (2005, January 29). E-learning the way of the future, educator says. Edmonton Journal, p. A6.

Soe, K., Koki, S., & Chang, J. M. (2000). Effect of computer-assisted instruction (CAI) on reading achievement: A meta-analysis. Honolulu, HI: Pacific Resources for Education and Learning. (ERIC Document Reproduction Service No. ED443079).

Steinberger, C. (2002). Wireless meets wireline elearning. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (Vol. 2002, pp. 1858–1863). Retrieved September 12, 2005, from AACE Digital Library database.

Summerfield, R. (2000, September 5). Post-secondary overflow benefits small universities: Athabasca U. can handle increase in enrolment. Calgary Herald, p. B4.

Technology v textbooks: Costly computer systems compromise other essential teaching needs. (2001, September 8). National Post, p. E11.

Terrell, E. (2002). The effectiveness of educational technology: Will the “enhancing education through technology act of 2001” really expand our knowledge of teaching and learning with technology? Norfolk, VA: Association for the Advancement of Computing in Education (AACE). (ERIC Document Reproduction Service No. ED479620).

Thibodeau, M. (2001, February 9). Les universités canadiennes menacées par l’apprentissage en ligne? [Canadian universities threatened by online learning?] La Presse, p. A8.

Torgerson, C. J., & Elbourne, D. (2002). A systematic review and meta-analysis of the effectiveness of information and communication technology (ICT) on the teaching of spelling. Journal of Research in Reading, 25(2), 129–143.

Tuinman, J. (2000). Educational technology: The answer to a fundamental educational agenda. Education Canada, 40(2), 24.

Ullmer, E. J. (1994). Media and learning: Are there two kinds of truth? Educational Technology Research & Development, 42(1), 21–32.

UNESCO – Division of Higher Education. (2002). Information and communication technologies in teacher education: A planning guide. Paris, France: UNESCO – Division of Higher Education.

Ungerleider, C., & Burns, T. (2002). Information and communication technologies in elementary and secondary education: A state of the art review. Prepared for the 2002 Pan-Canadian Education Research Agenda Symposium, Montreal, QC.

Urquhart, C., Chambers, M., Connor, S., Lewis, L., Murphy, J., Roberts, R., et al. (2002). Evaluation of distance learning delivery of health information management and health informatics programmes: A UK perspective. Health Information and Libraries Journal, 19(3), 146–157.

Verburg, G., Borthwick, B., Bennett, B., & Rumney, P. (2003). Online support to facilitate the reintegration of students with brain injury: Trials and errors. NeuroRehabilitation,18(2), 113–123.

Wake, B. (2000, September 5). No more teachers, no more books? Not too far-fetched. Kingston Whig-Standard, p. 36.

Wanless, T. (2000, June 30). E-learning growing. Leader Post, p. D1.

WBEC (Web-Based Education Commission). (2000). The power of the Internet for learning: Moving from promise to practice [Online]. Retrieved June 22, 2004, from http://www.hpcnet.org/upload/wbec/report/WBECReport.pdf

Whelan, R., & Plass, J. (2002). Is e-learning effective? A review of literature from 1993–2001. Proceedings of World Conference on E-Learning in Corp., Govt., Health, & Higher Ed. (Vol. 2002, pp. 1026–1028). Retrieved September 12, 2005, from AACE Digital Library database.

Wiley, D. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects [Online version]. Retrieved July 12, 2005, from http://www.reusability.org/read/

Wiley, D. (2003). Learning objects: Difficulties and opportunities. Retrieved September 16, 2003, from http://wiley.ed.usu.edu/docs/lo_do.pdf

Winne, P. H., Nesbit, J. C., Kumar, V., Hadwin, A. F., Lajoie, S. P., Azevedo, R. A., et al. (in press). Supporting self-regulated learning with gStudy software: The Learning Kit Project. Technology, Instruction, Cognition and Learning.

Wood, L. (2000, October 3). Schools in cyberspace: Virtual study. National Post, p. E4.

Yazon, J. M., Mayer-Smith, J. A., & Redfield, R. J. (2002). Does the medium change the message? The impact of a web-based genetics course on university students’ perspectives on learning and teaching. Computers & Education, 38(1), 267–285.


ISSN: 1499-6685