Canadian Journal of Learning and Technology

Volume 32(3) Fall / automne 2006

A Review of e-Learning in Canada: A Rough Sketch of the Evidence, Gaps and Promising Directions: A Commentary

Heather Kanuka

Author

Heather Kanuka is Associate Professor and Canada Research Chair in eLearning at Athabasca University. She can be reached at heatherk@athabascau.ca

The purpose of the Review of e-Learning in Canada, conducted by Abrami and associates (whom I refer to as ‘the team’), was to identify the evidence and gaps on the topic of e-learning and, based on these data, to suggest promising directions for e-learning. Funded by the Canadian Council on Learning, the stated objectives of the review were to: 1) identify and verify through research the most effective practices and procedures to promote learning; 2) identify major gaps in our knowledge and understanding of e-learning; and 3) identify the most promising lines of inquiry for addressing those gaps. The review covers the literature from 2000 forward, with an attempt to focus on Canadian literature in both official languages. Using an Argument Catalogue codebook, the researchers identified a number of classes upon which the review was framed. The documents were also coded to provide data on the outcomes of e-learning research from the following perspectives: general public opinion, practitioner literature, policy documents, scholarly reviews, and primary Canadian research.

Did the team achieve their objectives? And do their outcomes provide us with promising directions for e-learning? This commentary begins by deconstructing the activities of the review and concludes with comments on how well its objectives were met.

Introduction

The introduction begins with the following explanation of the term e-learning:

E-learning has become the general term encompassing the application of computer technologies to education, whether it occurs in face-to-face classrooms, in blended and hybrid courses, in mediated distance education contexts or in online learning environments. The Canadian Council on Learning (CCL) defines e-learning as the development of knowledge and skills through the use of information and communication technologies (ICTs) particularly to support interactions for learning—interactions with content, with learning activities and tools, and with other people. It is not merely content-related, not limited to a particular technology and can be a component of blended or hybrid learning. (Rossiter, 2002; also 2005 in an address at the CCL Workshop on E-Learning). The CCL recognizes the breadth of the concept of learning through electronic means, as well as its growing pervasiveness in Canadian institutions of formal learning, at the level of elementary and secondary schools and of colleges, and universities, as well as in early childhood and health related learning. (p.9)

The term e-learning, when used in everyday language, is rich in multiple meanings; beginning with an encompassing definition is therefore essential. Whether or not the reader agrees with the team’s (or CCL’s) description of the term, establishing its meaning early in the introduction ensures that readers are operating with the same set of understandings.

Following the definition of the term e-learning is a nicely constructed overview of e-learning from both enthusiasts and sceptics. It is also here that the biases and undeclared assumptions of the research team begin to emerge. The team cites an article from Maclean’s magazine entitled “How computers make our kids stupid”. They note that this article, which is one person’s opinion, criticises the use of technologies. They observe further that this exposé does not provide an accurate, complete or balanced portrait of the use of technologies in the learning process, making the point that the reporter’s personal perceptions of technologies in the classroom are unfounded. While not disagreeing with the team’s criticism of the Maclean’s article, it is certainly possible to find many anecdotes and opinion stories advocating the benefits of technologies—stories that have been equally compelling and have generated instant and widespread reaction. Although the point the team is trying to illustrate with their Maclean’s example is that researchers have to “sift through the accumulated evidence to form an accurate picture about scientific knowledge” (p. 10), it is revealing that they selected an opinion story that takes a critical perspective of technology to make this point. Selection of this example thinly veils a bias 1) toward positive uses of technologies and 2) against personal opinions as a valid form of research. I will revisit this latter point in the closing section of this commentary.

Approaches and Methodology

This section provides a detailed explanation of the review process. The study is well grounded in a framework for the review process, based on an analysis of news media, practitioner opinions and policy documents—which the team refers to as their “Argument Catalogue”. The team articulated the standards they strove to achieve as:

Did the team achieve these standards? Based on the document provided to me, the literature reviewed was current. Was it comprehensive? Based on the team’s working paradigm, the review was as comprehensive as could be expected given the time constraints placed on the team. On the second standard, was the process for locating the sources described in sufficient detail to be repeatable? The team has provided a thorough account of their activities, including where the sources were gathered from, the criteria for inclusion (or not), and how the data were analysed. It should be noted, though, that the links on the CSLP website that provide access to the complete citations and the Argument Catalogue codebook were not operational (404 error). The team also stated that there would be emphasis on both French and English sources; however, in the report I counted only three French sources, all of which were from La Presse—a privately owned daily French-language newspaper aimed at a middle-class readership.

Results

In regard to the team’s standard on the use of a precise methodology, did they ensure accurate results? The selective summary of results (a sample of document conclusions) brings forward an interesting overview and, indeed, provides us with a rough sketch of the evidence and gaps in e-learning within each category. The team has described their findings in a concise yet encompassing manner, while also acknowledging the difficulties and challenges encountered with such reviews. The sample conclusions provided in the summary include many important documents and research studies that have had an impact on e-learning in Canada.

It is less clear whether the team achieved this standard in regard to the theme areas. On the first part of this standard, I found the methodology to be, as claimed, precise, well articulated, and transparent. The team leaders are well known and highly regarded for their expertise in literature reviews, and this study, consistent with their prior work, is no exception. What is less clear is whether the conclusions are accurate. In particular, the report provides a rationale for the categorization of the impacts of e-learning. Unfortunately, the report does not specify delimitations for these impacts. For example, achievement is one of the areas identified as an important impact of e-learning. But what does achievement mean? Does it mean successful completion of a course? Or a program? Does it mean achieving better grades than with other forms of learning? If achievement is defined as better grades, what kinds of learning outcomes were researched? For example, does achievement with e-learning differ for surface learning versus deep learning (e.g., Biggs, 1999; Entwistle & Ramsden, 1983; Prosser & Trigwell, 1991; Trigwell & Prosser, 1991)? Or was achievement based on learning domains (e.g., Bloom, 1956; Gagné, 1965)? If so, how does achievement with e-learning differ across the cognitive, affective and psychomotor domains? Or does e-learning impact all of these domains equally? Does e-learning impact achievement equally across the disciplines? Extensive research conducted by Donald (2002), for example, on learning to think across the disciplines revealed that there are significant differences in thinking, validation processes and learning activities between disciplines. Donald’s research shows that the validation process for English involves critiquing others’ claims, peer reviews and testing the parts against the whole, while psychology uses inter-rater reliability and empirical testing. Given the significant differences between these disciplines, it would be useful to know whether achievement with e-learning is the same across disciplines.

Also missing in the review on achievement are research findings which have revealed that students infrequently engage in the communicative processes that comprise critical discourse—an essential component of achievement as it relates to higher levels of learning (see, for example: Aviv, Erlich, Ravid & Geva, 2003; Bonk & Cunningham, 1998; Bullen, 1999; Davis & Rouzie, 2002; De Laat, 2001; Garrison, Anderson & Archer, 2001; Gunawardena, Carabajal & Lowe, 2001; Gunawardena, Lowe & Anderson, 1997; Jeong, 2004; Kanuka, 2005; Kanuka & Anderson, 1998; Lopez-Islas, 2001; McKlin, Harmon, Evans & Jones, 2002; McLoughlin & Luca, 2000; Meyer, 2003; Nussbaum, Hartley, Sinatra, Reynolds & Bendixen, 2002; Pawan, Paulus, Yalcin & Chang, 2003; Pena-Shaff, 2005; Pena-Shaff, Martin, & Gay, 2001; Pena-Shaff & Nicholls, 2004; Rourke, 2005; Rovai & Barnum, 2003; Thomas, 2002; Vaughan & Garrison, 2005; Veerman, Andriessen, & Kanselaar, 2000; Wilson et al., 2003; Yakimovicz & Murphy, 1995). Research conducted by Angeli, Valanides, and Bonk (2003) is representative of many of these studies’ conclusions: “students primarily share personal experiences amongst themselves, and their responses appeared to be subjective and naïve at times. Students’ discourse was also extremely conversational and opinionated and showed little evidence of critical thinking” (p. 40).

It has been difficult for most of us concerned with e-learning in higher education to ignore these disappointing results—and yet these findings are not reflected in the team’s review of the literature. Indeed, one conclusion by the team is that “online technologies facilitate the development of higher-order critical thinking; providing great potential for educative dialogues” (p. 24). Canadian researchers (e.g., Garrison, Anderson, & Archer, 2001; Kanuka, 2005; Kanuka & Anderson, 1998; Kanuka, Rourke & Laflamme, in press; Rourke, 2005; Vaughan & Garrison, 2005) have reported results that do not support this conclusion. This inconsistency arises, probably, because the literature review conducted by the team was carried out using keyword searches, frequency counts and statistical analysis for significance—which I assume (as I cannot access the website for a complete set of references) included the well-known, and well-documented, findings that knowledge acquisition (or surface learning) in computer-based courses is significantly better, or not significantly different (e.g., Pascarella & Terenzini, 2005; Russell, 1999), combined with the more recent, and not yet as well documented, research on higher levels of learning (or deep learning). Such findings, in part, lead to the misleading conclusion that e-learning has a positive impact on achievement. I say ‘in part’ deliberately, as these findings may reflect achievement with respect to knowledge acquisition, but not with respect to certain aspects of higher levels of learning, such as critical discourse. Most reviews of the research acknowledge these kinds of complexities. Pascarella and Terenzini, for example, qualify their review of the literature in a number of areas; they observe that engagement is essential to their findings:

A student’s coursework and classroom experiences shape both the nature and extent of his or her acquisition of subject matter knowledge and academic skills … what the student does to exploit the academic opportunities provided by the institution may have an equal, if not greater, influence … other things being equal, the more the student is psychologically engaged in activities and tasks that reinforce the formal academic experience, the more he or she will learn. (p. 119)

The team does acknowledge that “there is variability associated with the predictors and outcomes [and] … the quality and quantity of evidence does not allow us to conclude with certainty the factors which impact on e-learning” (p. 43). Nevertheless, the team has provided us with conclusions. The problems I have identified with the conclusions made by the team on achievement can be applied to each of the impacts investigated. How, for example, was the construct of motivation operationalized? Were the sources reviewed related to Houle’s (1961) framework, based on activity-oriented (social) motivation, learning-oriented motivation or goal-oriented motivation? Or was it based on Tough’s (1979) motivational framework for adults, which suggests we are motivated to learn (or not) based on self-esteem, pleasing and impressing others, or certain other satisfactions? As with achievement, does e-learning impact each of these aspects of motivation equally? And can we apply the results to all of the theme areas (adult education, early childhood education, elementary/secondary education, postsecondary education, health and learning) equally?

Perhaps the most problematic of these constructs, for me, was the interactivity/communication impact. The difficulty I had with this impact was twofold. The first troubling aspect was trying to understand why the team placed interactivity in the same category as communication. Interacting with an automated, computer-generated response, for example, is not the same as communicating with one’s peers or instructors. Secondly, we know from the extensive research in the field of communication that different communication technologies have different impacts on how we communicate. Given that the most critical aspect of the teaching-learning process is effective communication, understanding interpersonal communication with networked communication technologies is worthy of a careful review and a thorough explanation of how we communicate with technologies. We know, for example, that communication is a complex multimodal process that involves not only speech, but also gaze, gesture, and facial expressions (Clark, 1996; Clark & Brennan, 1991). Research in this area has given rise to the belief that multimodal technologies (such as video conferencing, which provides both speech and vision) afford more effective communication than single-mode technologies (such as audio conferencing or e-mail and listservs). A review of this research shows that speech alone can be as effective as speech plus video, that under certain circumstances speech can be as effective as face-to-face communication, and that video is not significantly different from speech communication (Reid, 1977; Whittaker, 2003). Some research has even revealed that adding visual information may impair critical aspects of spoken communication (Anderson et al., 2000; Whittaker & O’Conaill, 1997). Further, there is evidence from the research on communication media which indicates that audio systems (e.g., Elluminate, Centra), and audio and video systems (e.g., Web cams), can provide more effective interpersonal interactions than text-based communication systems alone (Collett, Kanuka, Blanchette & Goodale, 1999). This research should be interpreted with caution, though, as low-quality video systems (e.g., discontiguous visual and audio transmissions) may be distracting to the point where the communication process and its quality are severely eroded (Whittaker, 1995; Whittaker, 2003).

Some research within the field of communication has also focused on aspects of media richness and/or the effects of filtered cues. Results of this research suggest that different communication media affect groups largely through differential transmission of social context cues (or paralinguistic cues). Text-based computer-mediated communication is considered ‘social cueing poor’ as it limits the exchange of paralinguistic and interpersonal cues (e.g., age, sex, physical appearance) and cues from the physical surroundings. Social cueing is an important aspect of communication that facilitates and regulates interpersonal interaction and information exchange, and monitors feedback (Straus, 1997). Reductions in social cues through the use of reduced-channel media (e.g., text-based communication tools) disrupt the flow of communication, causing difficulty in following and understanding discussions (Straus & McGrath, 1994), which can diminish the intensity of interpersonal interactions and social connectedness—as well as increase a sense of anonymity and feelings of depersonalization (Straus, 1997). In a depersonalized context, in turn, there can be reduced motivation to share personal information and/or inquire about others, as well as reduced expressive communication (Hiltz, Johnson & Turoff, 1986; McGuire, Kiesler & Siegel, 1987). Explanations for these results tend to revolve around the belief that the time and effort required to type versus speak result in considerably less communication in text-based discussions than face-to-face—in addition to difficulties in following and understanding the text without supplementary social cues, which adds to the cognitive workload (Straus, 1997).

This kind of full, descriptive examination of what the literature tells us about communication technologies illustrates that this construct is complex, with implications for educators concerning not only the kinds of communication technologies but also the contexts in which they are used—in contrast to the statistical analysis and numerical tables provided by the team, which tell us only whether the aggregated results are significantly better or not. Consequently, it is difficult to know if their analysis is accurate, because each of the impacts investigated (achievement, motivation/satisfaction, interactivity/communication, social demands, attrition/retention, learning flexibility, cost) is a complex construct that varies in effectiveness under different circumstances. As the team has acknowledged, while the literature points to the positive impacts of e-learning on achievement, motivation, communication, learning flexibility and meeting social demands, this “consensual evidence alone does not reveal what accounts for these positive impressions” (p. 46).

Perhaps even more important than the absence of boundaries on the impacts is that this report does not consider the consequences of the increased use of e-learning technologies within each of the impacts reviewed. The following quote, from a participant in a recent study I conducted, illustrates why looking at the consequences is important:

Let’s just think about this as we might about Pfizer. What might they say or do to ascertain their success [or positive impact]? Well they might say that 90% of their patients who’ve taken their drug have recovered from chronic renal failure. However, unreported is the 97% who got brain cancer from the drug … We can only say there is no significant difference in outcomes if we only look at the apparent production of apparent knowledge outcomes. So, like Pfizer, we can produce data that makes us feel good, like the number of students enrolled in e-learning … and that could then be a justification for more of this … but it wouldn’t indicate that it is good and what the consequences are. It just indicates that a lot of people are prepared to put a lot of money and time into it.

This quote illustrates the point that philosophers of technology have argued for decades (e.g., Chandler, 1996; Ihde, 1979; Winner, 1993): that research on technologies has not, and currently does not, investigate the consequences. A technology is designed to serve a specific purpose (intentionality) and, as such, it amplifies that aspect (selectivity) of our use. But in the process it also unavoidably reduces other aspects of our experience. For example, we use a telephone to increase our ability to communicate with others, achieved through greater independence of place when communicating verbally (amplification). But the experience is, as a consequence, less real than person-to-person communication in that we lose many paralinguistic cues (reduction). How much this loss matters to us depends on whether such consequences are in harmony with our overall intentions, as side effects can, of course, be positive as well as negative. In this respect, Winner (1993) has argued that technological artefacts may embody affirmation, but may also become a betrayal: “The same devices that have brought wonderful conveniences in transportation and communication have also tended to erode community. In the maxim of theologian Richard Penniman, ‘They got what they wanted, but they lost what they had’ ” (p. 371). Winner maintains that the interesting research questions have little at all to do with any alleged self-generating properties of modern technology; rather, they have to do with the often-painful ironies of technical choice. Noticeably absent from this review are the often-painful ironies, or consequences, of the use of e-learning technologies.

Discussion

The report concludes with a list of what is currently known about e-learning. I found it interesting to read in the introduction to this report a statement by the team that “it is a shame to attempt innovation and not be able to tell why it works or doesn’t work” (p. 3), and yet this is exactly what the team has provided as a summary conclusion: “What [bold added] We Know: The Evidence”. Indeed, it is a shame these summary messages do not tell us why e-learning works, or doesn’t work, and more importantly under what conditions it is most effective, and when it isn’t, as well as the consequences. To their credit, though, the team does acknowledge this problem in their discussion: “this review … does not readily present us with evidence of best practices and ‘what works’ in e-learning” (p. 50). However, if we go back to page 11, we can see that a primary objective of the review was to “identify the most effective practices and procedures to promote learning.” On this point, I can only conclude that the team did not achieve a central objective of this literature review.

Concluding Comments

The methodological approaches we use in our research activities reflect our epistemological assumptions and reveal the nature of the knowledge we value, in particular its foundations, scope, and validity. For this review, the team systematically compiled the current literature and created an Argument Catalogue codebook.

While claiming to examine data from a number of perspectives, the team noted (twice) that:

We found that over half of the studies conducted in Canada are qualitative in nature, with the other nearly half split between surveys and quantitative studies (correlational and experimental). When we looked at the nature of the research designs, again, 51% are qualitative case studies and 15.8% are experimental or quasi-experimental studies. It seems that studies that can help us understand “what works” in e-learning settings are underrepresented in the Canadian research literature. (p.1; p. 48)

Clarification is needed about what the team means by collecting data from a number of perspectives. What the team implies in the above quote about qualitative research—in particular, the reference to the underrepresentation of ‘what works’—is puzzling. While it would seem we can conclude that the team believes qualitative research tells us nothing about what works, they also mention that:

We fully believe that giving voice to the other, often not considered, sources of evidence, whether they are mere opinion, based on practical experience or derived from empirical research is important in developing a complete portrait of a field which touches Canadians at so many different levels, and which requires such a substantial investment in human and material resources. (p. 57)

Without further explanation, these statements by the team contradict each other, leaving me to wonder what they considered to be reliable sources for their review. A careful read, however, reveals the team’s actual views on this point. In the introduction the team clearly dismissed the Maclean’s article on the grounds that it was “one reporter’s attempt to weave together personal anecdotes, stories, and observations” (p. 10). The Maclean’s article, which is ‘merely one person’s opinion’, was criticised for lacking evidence and being inaccurate, incomplete, and unbalanced. As such, while the team ‘says’: “We fully believe that giving voice to the other, often not considered, sources of evidence, whether they are mere opinion, based on practical experience or derived from empirical research is important in developing a complete portrait of a field”, in fact, they do not. Rather, it appears they give a privileged voice to the “15.8% experimental or quasi-experimental studies” which, according to the team, are the only kind of research that tells us “what works”.

These kinds of inconsistencies could easily have been avoided if the team had articulated their research assumptions at the beginning of the report and stayed true to those assumptions, rather than being politically correct by giving false piety to the diversity of voices. As individuals, each of us holds a set of assumptions about how we believe knowledge is created and, based on our assumptions, we overlay a framework—or a working research design. Based on the research design for this literature review, it is apparent that the team is working on the assumption that, given a large enough sample and statistical analysis for significance, we can make predictions about e-learning.

Perhaps more importantly, rather than lamenting the number of qualitative studies (in particular, case studies), the team might have asked the question: why is so much of the e-learning research approached from the qualitative paradigm? A good place to begin responding to this question is to acknowledge that, despite the polemical nature of the Maclean’s article, the e-learning research community must face the reality that our efforts have failed to provide adequate guidance for developers and practitioners. As Reeves has noted on several occasions (1995; 1999):

Much of the research in IT is grounded in a “realist” philosophy of science, i.e., conducted under the assumption that education is part of an objective reality governed by natural laws and therefore can be studied in a manner similar to other natural sciences such as chemistry and biology. If this assumption about the nature of the phenomena we study is erroneous (and I believe it is), then we inevitably ask the wrong questions in our research. Further, even if there are underlying laws that influence learning, the complexity inherent in these laws may defy our ability to perceive, much less control, them (Casti, 1994). Cronbach (1975) pointed out two decades ago, our empirical research may be doomed to failure because we simply cannot pile up generalizations fast enough to adapt our instructional treatments to the myriad of variables inherent in any given instance of instruction. (¶ 15)

Many e-learning researchers conduct their research based on the assumption that natural laws govern learning (and that includes this team) and, therefore, also assume e-learning can be studied with systematic, rule-governed methodologies. However, this type of research has resulted in a large body of knowledge in e-learning that has little social relevance, precisely because it does not reflect the real-world messiness of the everyday problems that face e-learning practitioners and, therefore, makes little contribution to how we can make e-learning better. In this regard, Reeves (1995) argues for ‘socially responsible’ research—research that aims to make education better by finding practical solutions to the everyday problems faced by e-learning practitioners. Of course, all good research is time consuming and hard to do, but it takes more than hard work to conduct socially responsible research; it takes a desire to discover what the everyday problems are for e-learning practitioners and a recognition that we live in a continually evolving world, composed of complicated human beings, resulting in ‘messy’ environments—or what Schön (1983) referred to as “the swampy lowlands” of professional practice. A key element in generating socially responsible research is the use of more emergent methodologies that can effectively address the ‘real world’ messiness that e-learning practitioners encounter routinely. On this point, Shulman (1997; see also Cronbach & Suppes, 1969), similar to Reeves (1995), argues that “disciplined inquiry does not necessarily follow well-established, formal procedures. Some of the most excellent inquiry is free-ranging and speculative in its initial stages, trying what might seem to be bizarre combinations of ideas and procedures, or restlessly casting about for ideas” (p. 8).

I’d like to put a different spin on the observation by the team that “there appears to have been a disproportionate emphasis on qualitative research in the Canadian e-learning research culture” (p. 3; p. 51). Contrary to the team’s concern, I infer from this that the majority of educators are using emergent research paradigms (which some might refer to as ‘free-ranging’ or ‘bizarre’) in an attempt to conduct socially responsible research—the results of which provide useful guidance for e-learning developers and practitioners.

In closing, I’d like to add that reviews of the literature are very difficult to do well, remarkably time consuming, easy for others to criticize, and, when done well, one of the most valuable forms of publication. Although I have cited areas that I consider problematic, this is a comprehensive and well-constructed literature review. Educators in all sectors who are involved with e-learning should take the time to read the team’s review.

References

Anderson, A. H., Smallwood, L., MacDonald, R., Mullin, J., Fleming, A., & O’Malley, C. (2000). Video data and video links in mediated communication: what do users value? International Journal of Human-Computer Studies, 52(1), 165–187.

Angeli, C., Valanides, N., & Bonk, C. J. (2003). Communication in a web-based conferencing system: The quality of computer-mediated interactions. British Journal of Educational Technology, 34(1), 31–43.

Aviv, R., Erlich, Z., Ravid, G., & Geva, A. (2003). Network analysis of knowledge construction in asynchronous learning networks. Journal of Asynchronous Learning Networks, 7(3). Retrieved December 30, 2005, from http://www.sloan-c.org/publications/jaln/v7n3/v7n3_aviv.asp

Biggs, J. B. (1999). Teaching for quality learning at university. Buckingham: SRHE and Open University Press.

Bloom, B. (1956). Taxonomy of educational objectives. New York: Longmans, Green.

Bonk, C., & Cunningham, D. (1998). Searching for constructivist, learner-centered and sociocultural components for collaborative educational learning tools. In C. Bonk & K. King (Eds.), Electronic collaborators: Learner-centered technologies for literacy, apprenticeship, and discourse (pp. 25–50). New York: Erlbaum.

Bullen, M. (1999). Participation and critical thinking in online university distance education. Journal of Distance Education, 13(2). Retrieved April 20, 2006, from http://cade.athabascau.ca/vol13.2/bullen.html

Chandler, D. (1996). Engagement with media: Shaping and being shaped. Computer-Mediated Communication Magazine, February. Retrieved September 1, 2006, from http://users.aber.ac.uk/dgc/determ.html

Clark, H. (1996). Using language. Cambridge: Cambridge University Press.

Clark, H., & Brennan, S. (1991). Grounding in communication. In L. B. Resnick, J. Levine, & S. Teasley (Eds.), Perspectives on socially shared cognition. Washington, DC: APA Press.

Collett, D., Kanuka, H., Blanchette, J., & Goodale, C. (1999). Learning technologies in distance education. Edmonton, AB: University of Alberta.

Cronbach, L. J., & Suppes, P. (1969). Research for tomorrow’s schools: Disciplined inquiry for education. London: Macmillan.

Davis, M., & Rouzie, A. (2002). Cooperation vs. deliberation: Computer mediated conferencing and the problem of argument in international distance education. International Review of Research in Open and Distance Learning 3(1). Retrieved September 1, 2006, from http://www.irrodl.org/content/v3.1/davis.html

De Laat, M. (2001). Network and content analysis in an online community discourse. CSCL-ware in practice. New York: Kluwer Publications.

Donald, J. (2002). Learning to think: Disciplinary perspectives. San Francisco: Jossey-Bass.

Entwistle, N. J., & Ramsden, P. (1983). Understanding student learning. London: Croom Helm.

Gagné, R. (1965). The conditions of learning. New York: Holt, Rinehart & Winston.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23.

Gunawardena, C., Carabajal, K., & Lowe, C. A. (2001). Critical analysis of models and methods used to evaluate online learning networks. (ERIC Document Reproduction Service No. ED456159).

Gunawardena, C., Lowe, C., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 395–429.

Hiltz, S. R., Johnson, K., & Turoff, M. (1986). Experiments in group decision making: Disinhibition, deindividuation, and group process in pen name and real name computer conferences. Decision Support Systems, 5, 217–232.

Houle, C. O. (1961). The inquiring mind. Madison, WI: University of Wisconsin.

Ihde, D. (1979). Technics and Praxis. London: D. Reil.

Jeong, A. (2004). The effects of communication style and message function in triggering responses and critical discussion in computer-supported collaborative argumentation. Paper in conference proceedings of the annual meeting of the Association for Educational Communications & Technology, Chicago, IL. Retrieved from http://dev22448-01.sp01.fsu.edu/Research/CommunicationStyles/CommStyles_AllanJeong_AECT2004Proceedings.pdf

Kanuka, H. (2005). An exploration into facilitating higher levels of learning in a text-based Internet learning environment using diverse instructional strategies. Journal of Computer Mediated Communication, 10(3). [online]. Retrieved September 1, 2006, from http://jcmc.indiana.edu/vol10/issue3/kanuka.html

Kanuka, H., & Anderson, T. (1998). On-line social interchange, discord, and knowledge construction. Journal of Distance Education, 13(1), 57–74.

Kanuka, H., Rourke, L., & Laflamme, E. (in press, 2006). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology.

Lopez-Islas (2001). A cross-cultural study of group processes and development in online conferences. Distance Education, 22(1), 85–121.

McGuire, T. W., Kiesler, S., & Siegel, J. (1987). Group and computer-mediated discussion effects in risk decision-making. Journal of Personality and Social Psychology, 52(5), 917–930.

McKlin, T., Harmon, S. W., Evans, W., & Jones, M. G. (2002). Cognitive presence in web based learning: A content analysis of students’ online discussions. IT Forum, 60.

McLoughlin, C., & Luca, J. (2000). Cognitive engagement and higher order thinking through computer conferencing: We know why but do we know how? Retrieved September 1, 2006, from http://www.lsn.curtin.edu.au/tlf/tlf2000/mcloughlin.html

Meyer, K. A. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55–65.

Nussbaum, M., Hartley, K., Sinatra, G. M., Reynolds, R. E., & Bendixen, L. D. (2002). Enhancing the quality of on-line discussions. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students (Vol. 2): A third decade of research. San Francisco: Jossey-Bass.

Pawan, F., Paulus, T. M., Yalcin, S., & Chang, C-F. (2003). Online learning: Patterns of engagement and interaction among in-service teachers. Language Learning & Technology, 7(3), 119–140. [online]. Retrieved October 14, 2006, from http://llt.msu.edu/vol7num3/pawan/

Pena-Shaff, J. (2005). Asynchronous online discussions as a tool for learning: Students’ attitudes, expectations, and perceptions. Journal of Interactive Learning Research, 16(4), 409–430.

Pena-Shaff, J., Martin, W., & Gay, G. (2001). An epistemological framework for analyzing student interactions in computer-mediated communication environments. Journal of Interactive Learning Research, 12, 41–68.

Pena-Shaff, J., & Nicholls, C. (2004). Analyzing student interactions and meaning construction in Computer Bulletin Board (BBS) discussions. Computers & Education, 42, 243–265.

Prosser, M. & Trigwell, K. (1991). Student evaluations of teaching and courses: Student learning approaches and outcomes as criteria of validity. Contemporary Educational Psychology, 16, 269–301.

Reeves, T. C. (1995). Questioning the questions of instructional technology research. In M. R. Simonson & M. Anderson (Eds.), Proceedings of the Annual Conference of the Association for Educational Communications and Technology, Research and Theory Division (pp. 459–470). Anaheim, CA.

Reeves, T. C. (1999). Rigorous and socially responsible interactive learning research. Association for the Advancement of Computing in Education. Retrieved September 1, 2006, from http://www.aace.org/pubs/jilr/intro.html

Reid, A. (1977). Comparing the telephone with face-to-face interaction. In I. Pool (Ed.), The social impact of the telephone (pp. 386–414). Cambridge, MA: MIT Press.

Rourke, L. (2005). Learning through online discussion. Unpublished doctoral dissertation, University of Alberta, Edmonton, Alberta, Canada.

Rovai, A., & Barnum, K. (2003). Online course effectiveness: An analysis of student interactions and perceptions of learning. Journal of Distance Education, 18(1), 57–73.

Russell, T. L. (1999). The no significant difference phenomenon. Raleigh, NC: Office of Instructional Telecommunications, North Carolina State University.

Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.

Shulman, L. S. (1997). Disciplines of inquiry in education: A new overview. In R. M. Jaeger (Ed.), Complementary methods for research in education (2nd ed.). Washington, DC: American Educational Research Association.

Straus, S. G. (1997). Technology, group process, and group outcomes: Testing the connections in computer-mediated and face-to-face groups. Human-Computer Interaction, 12(3), 227–266.

Straus, S. G., & McGrath, J. E. (1994). Does the medium matter? The interaction of task type and technology on group performance and member reactions. Journal of Applied Psychology, 79, 87–97.

Thomas, M. (2002). Learning within incoherent structures: The space of online discussion forums. Journal of Computer Assisted Learning, 18, 351–366.

Tough, A. (1979). The adult’s learning projects. Toronto, ON: Ontario Institute for Studies in Education.

Trigwell, K., & Prosser, M. (1991). Relating approaches to study and quality of learning outcomes at the course level. British Journal of Educational Psychology, 61, 265–275.

Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. Internet and Higher Education, 8(1), 1–12.

Veerman, A., Andriessen, J., & Kanselaar, G. (2000). Learning through synchronous electronic discussion. Computers & Education, 34(3–4), 269–290.

Whittaker, S. (1995). Rethinking video as a technology for interpersonal communication: Theory and design implications. International Journal of Human-Computer Studies, 42(5), 501–529.

Whittaker, S. (2003). Things to talk about when talking about things. Human Computer Interaction, 18(2), 149–170.

Whittaker, S., & O’Conaill, B. (1997). Evaluating videoconferencing. In Proceedings of CHI ’93 Human Factors in Computing Systems. New York: ACM Press.

Wilson, D., Varnhagen, S., Krupa, E., Kasprzak, S., Hunting, V., & Taylor, A. (2003). Instructors’ adaptation to online graduate education in health promotion: A qualitative study. Journal of Distance Education, 18(2), 1–15.

Winner, L. (1993). Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology, and Human Values, 18(3), 362–378.

Yakimovicz, A., & Murphy, K. L. (1995). Constructivism and collaboration on the Internet: Case study of a graduate class experience. Computers & Education, 24(3), 203–209.


ISSN: 1499-6685