Authors
Genevieve Marie Johnson teaches in the Department of Psychology and Sociology and is Chair of the Research Ethics Review Committee at Grant MacEwan College, Edmonton, Alberta. Correspondence regarding this article can be sent to her at: johnsong@macewan.ca
Abstract: Online practice tests and quizzes are commonly available to higher education students. The extent of student use of such study tools and the relationship of use to achievement, however, have not been fully investigated. One hundred twelve college students were encouraged to use optional online quizzes in preparation for proctored examinations. In-class examinations included multiple choice items that assessed four cognitive domains: factual, application, comprehension, and conceptual. In general, few students made extensive use of the optional online quizzes. Analysis revealed that student use of online quizzes was associated with increased academic achievement, although it was not clear if quiz use caused achievement or achievement caused quiz use. Short-answer and true-false online quiz items were differentially associated with measures of academic achievement, suggesting that cognitive processing differed across item format. Implications for further research are provided.
Frequent in-class quizzes have been associated with positive learning outcomes including increased student achievement, attendance, and confidence (Ruscio, 2001; Wilder, Flood, & Stromsnes, 2001). Frequent quizzing reportedly maintains student study effort and promotes course engagement (Smith et al., 2000; Sporer, 2001). The research literature, however, does not unanimously support the achievement benefits of quizzes. For example, Haberyan (2003) provided two sections of General Biology students with weekly in-class quizzes; two equivalent sections received no quizzes. Although students rated the quizzes favorably and believed they were helpful in preparing for in-class examinations, there were no significant achievement differences across sections. Kluger and DeNisi (1996) conducted a meta-analysis of feedback interventions, including quizzes, and concluded that such feedback does not always enhance learner performance and may, in some cases, have detrimental effects. Such contradictory findings suggest that simple questions have been asked of a complex issue (e.g., the learning benefits of quizzes may be mediated by course content and student characteristics).
Practice tests help students evaluate their learning and focus study effort accordingly (Maki, 1998). Unlike quizzes, practice tests are not included in the calculation of grades but, rather, are viewed as study tools (Grabe & Sigler, 2002). College students are poor predictors of examination performance; high-achieving students are underconfident and low-achieving students are overconfident (Hacker, Bol, Horgan, & Rakow, 2000). Practice tests provide students with an approximation of actual test performance. The instructional utility of practice tests, however, has been challenged (Perlman, 2003). In some cases, practice test scores correlate only modestly with criterion measures (Herring, 1999). Bol and Hacker (2001) assessed graduate student learning in two instructional conditions: practice tests and no practice tests. Students who used practice tests as a study strategy scored significantly lower on the midterm examination in a research methods course than students without access to practice tests. The researchers concluded that students used practice tests inappropriately and that “traditional review may be a more effective strategy for enhancing performance and calibration accuracy [estimation of achievement] than practice tests” (p. 141). Thus, while popular, practice tests do not necessarily facilitate undergraduate in-class test performance.
Excellence in undergraduate teaching is associated with prompt evaluative feedback to students (Chickering & Gamson, 1999). Because online or otherwise automated quizzes provide students with immediate performance feedback (EdTech, 2005; Hutchins, 2003), they are commonly recommended to undergraduate students as tools of study (Byers, 1999; Jensen, Johnson, & Johnson, 2002; Jensen, Moore, & Hatch, 2002; Kashy, Thoennessen, Albertelli, & Tsai, 2000). Indeed, via web site or compact disc, automated practice tests characteristically accompany introductory undergraduate textbooks (for example, refer to McGraw-Hill Higher Education, 2005).
Online quizzes have been successfully implemented in university-level foreign language learning. Fritz (2003) reported a study in which university students in Spanish and French classes completed weekly online quizzes using Blackboard. Results indicated that online quizzes were viable in foreign language classes and that 10-15 minutes of class time each week became available for instruction rather than quizzing. “Instructor time was also greatly conserved since quizzes were self-correcting and self-tabulating” (p. 1). Identified disadvantages included threats to the integrity of the process, since students were unsupervised during quiz completion, and the difficulty of developing valid online items (e.g., items without an auditory component). Itoh and Hannon (2002) described a collaborative effort that made online quizzes available to students learning Japanese. They suggest that “because of the convenience of online delivery, these quizzes are well suited to the needs of today’s liberal arts students who often participate in many extracurricular activities” (p. 551).
While online quizzes may be more convenient than pencil-and-paper quizzes, their relative effect on student achievement remains a subject of research interest and speculation. Derouza and Fleming (2003), for example, reported that students who took online practice tests academically outperformed students who took the same tests in pencil-and-paper format; immediate feedback may account for the learning advantage of students who use automated quizzes. Further, student attention may increase in response to the interface of online quizzes, which is typically more visually stimulating than a pencil-and-paper format (EdTech, 2005). Additionally, students may be more motivated to take online quizzes than pencil-and-paper quizzes because contemporary youth associate digital formats with recreation and leisure (Rotermann, 2001; Statistics Canada, 2004).
With regard to the learning benefits of online quizzes, research findings have been contradictory and thus inconclusive. Brothen and Wambach (2001), for example, described a developmental psychology course in which students had access to computerized quizzes as tools to prepare for proctored examinations. Their results indicated that “spending more time taking quizzes and taking them more times was related to poorer exam performance” (p. 293), even though the in-class examinations consisted of items identical to those in the online quizzes. A possible explanation for this result is that students used the textbook to answer quiz items and erroneously interpreted high quiz scores as indicative of content mastery. Grabe and Sigler (2002), on the other hand, provided students with four online study tools: multiple choice practice test items, short answer practice test items, lecture notes, and textbook notes. Students who made use of the tools academically outperformed those who did not. Students frequently accessed the multiple choice practice test items; no data were provided on the use of short answer items because “very few students made use of this resource” (p. 379).
While the literature consistently supports the convenience of online quizzes, and there is some evidence of associated learning benefits, many questions remain. The current investigation sought to: 1) explore patterns of student use of optional online quizzes and 2) determine the relationship of such use to academic achievement. To what extent do students make use of optional online quizzes in preparation for proctored examinations? Do differences in patterns of use relate to differences in student achievement? Is student use of optional online quizzes influenced by item format (i.e., true-false versus short-answer)? Does item format relate to student achievement?
Students in four sections (40 students per section) of an undergraduate educational psychology course were encouraged to use optional WebCT quizzes in preparation for in-class examinations. Throughout the academic term, students were reminded of the availability and potential benefits of these study tools. Students had unlimited access to 28 online quizzes: 14 true-false and 14 short-answer. While every effort was made to anticipate student short-answer responses (e.g., the scoring criteria accepted both upper- and lower-case letters, and both hyphenated and unhyphenated forms), automated scoring was problematic (e.g., spelling errors were marked incorrect). Since quiz scores did not contribute to final course grades and students were encouraged to consider the optional quizzes as tools of study, students judged the accuracy of their own short-answer entries (e.g., correct but misspelled) against the automated feedback provided upon quiz submission.
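The tolerant matching described above can be illustrated with a short sketch (hypothetical code; the study relied on the built-in WebCT Quiz Tool, not custom scoring). Case, hyphenation, and spacing are normalized away, but misspellings still fail, which is why students had to judge such entries against the feedback themselves:

```python
import re

def matches_answer(response: str, accepted_forms: list[str]) -> bool:
    """Return True if a short-answer response matches any accepted form,
    ignoring case, hyphens, and spacing (but not spelling errors)."""
    def normalize(text: str) -> str:
        # Keep letters and digits only, lower-cased, so that
        # "Short-Term" == "short term" == "shortterm".
        return re.sub(r"[^a-z0-9]", "", text.lower())

    target = normalize(response)
    return any(target == normalize(form) for form in accepted_forms)

print(matches_answer("Retro-active Interference", ["retroactive interference"]))  # True
print(matches_answer("retroactive interferance", ["retroactive interference"]))   # False
```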
Drawn from instructional support materials provided by the course textbook publisher (Renaud, 2003), one true-false and one short-answer quiz reflected the content of each of the 14 textbook chapters. The true-false quizzes included items that required recall of specific facts (e.g., Another term for top-down processing is feature analysis) as well as higher-level thinking skills (e.g., Metacognition primarily involves conditional knowledge). The short-answer items were scored according to specific criteria entered into the WebCT Quiz Tool. In most cases, such scoring limited items to recall of specific facts (e.g., The process that occurs when remembering information is hampered by the presence of other information is called ___). The number of items in each quiz varied from 18 to 36, and students accessed all items, in the same order, every time they opened the quiz in WebCT. All quizzes were available from the first day of the course until completion of the final examination.
Toward the end of the academic term, students in the four sections of the educational psychology course were invited to participate in the study by allowing their course examination marks and WebCT records to be used for research purposes. Course enrolment decreased over the term due to withdrawals as well as absenteeism. In the end, 112 students participated in the study. These students ranged in age from 18 to 33 years (mean 21.1 years). Approximately 77% of the sample was female, which is characteristic of the student population in the participating college. Participants reported an average of 18 college credits completed (range 0 to 147). With regard to intended plans for Bachelor of Education degree completion, 48.7% of participants were focused on elementary education, 37.6% on secondary education, 6.8% were undecided, and data were missing for 6.8% of the students.
To address the research questions, two student variables were measured: 1) use of optional online quizzes and 2) academic achievement.
Student Use of Optional Online Quizzes
Given that the quizzes were optional and could be taken an unlimited number of times, quiz marks were not valid measures of student use of these study tools. That is, some students may have taken a quiz prior to studying to identify areas of weakness; other students, given that quiz marks did not contribute to grades, may have submitted a quiz before responding to all items. Consequently, student use of optional online quizzes was coded as a binary variable (use/no-use) for each of the 28 quizzes. Use/no-use scores were obtained for each participating student for each true-false and short-answer quiz.
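A minimal sketch of this binary coding, assuming quiz submissions are exported from WebCT as one record per submission (the field names here are hypothetical):

```python
# Hypothetical export: one row per quiz submission, whatever the score.
submissions = [
    {"student": "s01", "quiz": "tf_03"},
    {"student": "s01", "quiz": "tf_03"},  # resubmissions do not change the code
    {"student": "s01", "quiz": "sa_03"},
    {"student": "s02", "quiz": "tf_01"},
]

# 14 true-false and 14 short-answer quizzes.
quizzes = [f"tf_{n:02d}" for n in range(1, 15)] + [f"sa_{n:02d}" for n in range(1, 15)]

submitted = {(row["student"], row["quiz"]) for row in submissions}
students = {row["student"] for row in submissions}

# use[student][quiz] = 1 (use) or 0 (no-use) for each of the 28 quizzes.
use = {s: {q: int((s, q) in submitted) for q in quizzes} for s in students}
```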
Student Academic Achievement
Student achievement was measured with the multiple choice test items on three proctored in-class midterm examinations and one final examination. The midterm examinations were not cumulative, and each assessed student knowledge of a relatively limited amount of course material. The final examination was cumulative and assessed mastery of all course content. Each midterm examination contained 24 multiple choice items, and the final examination contained 80 multiple choice items (36 items assessed previously tested material and 44 items assessed material covered subsequent to the third midterm examination). Although the midterm and final examinations also included case study analyses, these items were excluded from all achievement measures due to subjective marking. Multiple choice items were evenly distributed across four cognitive domains (i.e., factual, application, comprehension, conceptual) as specified in the test item bank provided by the textbook publisher (Renaud, 2003). Each domain included 38 examination items (i.e., six on each of three midterm examinations and 20 on the final examination). Correct responses for each type of multiple choice proctored test item were summed across all examinations. The result was four objective measures of student academic achievement, that is, four scores that reflected the total number correct on factual, application, comprehension, and conceptual in-class multiple choice test items. Table 1 presents a description of these measures of academic achievement for the group of participating college students.
Table 1: Descriptive Statistics for Measures of Student Academic Achievement
Descriptive statistics provide an indication of the extent of student use of the optional online quizzes. Pearson product-moment correlations determined the relationship between student use of optional online quizzes and academic achievement. Further, students were grouped on the basis of extent of quiz use (i.e., no-use, low-use, moderate-use, high-use), and group achievement differences were examined using analysis of variance.
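The correlational analysis can be expressed in a few lines (a sketch with made-up illustrative values, not the study's data):

```python
from scipy.stats import pearsonr

# Illustrative values: true-false quizzes submitted (0-14) and number
# correct on the 38 factual in-class multiple choice items, per student.
quizzes_submitted = [0, 2, 5, 11, 14, 3, 0, 8]
factual_scores = [25, 27, 30, 34, 36, 26, 28, 31]

# Pearson product-moment correlation between quiz use and achievement.
r, p = pearsonr(quizzes_submitted, factual_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```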
As presented in Table 2, students made limited use of the optional online quizzes. Approximately two-thirds of participating college students submitted at least one optional quiz during the academic term. The average number of true-false quizzes submitted was 3.7 of the 14 available; the average number of short-answer quizzes submitted was 2.7 of the 14 available. Given that the majority of students submitted at least one quiz during the academic term, access and navigation were not likely concerns. While approximately two-thirds of the students appeared to sample the online quizzes, few appeared to use them systematically as tools of study.
Table 2: Student Use of Optional Online Quizzes
Table 3 presents correlations between the number of quizzes that students submitted and the four measures of academic achievement. As student use of optional online quizzes increased, academic achievement tended to increase. In general, short-answer quiz items related more strongly to measures of student achievement than did true-false items. It may be that short-answer quiz items, which required the actual input of words, engaged students in course content at a deeper level than simply marking true or false. However, no significant relationship emerged between the number of optional short-answer online quizzes submitted and student achievement as measured by conceptual test items. Primarily factual short-answer quiz items may not have facilitated the cognitive processing necessary to respond to conceptual measures of learning (i.e., multiple choice items such as: Which type of attribution would best reflect learned helplessness?). Indeed, true-false quiz items can demand complex levels of cognitive processing (e.g., quiz items such as: Compared to those with field-dependence, people with field-independence are more likely to function well in a social setting and prefer subjects such as history). To summarize, student use of optional online quizzes was associated with academic achievement, but quiz format differentially related to achievement across cognitive domains.
Table 3: Correlations between Use of Optional Online Quizzes and Academic Achievement
To further analyze the relationship between use of optional online quizzes and academic achievement, students were categorized on the basis of extent of quiz use. Students who did not submit a single quiz throughout the academic term were labelled no-use. As presented in Table 4, 43 students did not submit a true-false quiz and 31 students did not submit a short-answer quiz. Given that 14 true-false and 14 short-answer quizzes were available, low-use students were defined as those who submitted between one and four quizzes, moderate-use students as those who submitted between five and nine quizzes, and high-use students as those who submitted between 10 and 14 quizzes of a given format. Such categorization allowed for comparison of group means on the four measures of academic achievement, as sketched below.
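Under those thresholds, the grouping and the analysis of variance might look as follows (a sketch; the scores are illustrative, not the study's data):

```python
from scipy.stats import f_oneway

def use_category(n_submitted: int) -> str:
    """Map the number of quizzes submitted (0-14) to a use category."""
    if n_submitted == 0:
        return "no-use"
    if n_submitted <= 4:
        return "low-use"
    if n_submitted <= 9:
        return "moderate-use"
    return "high-use"

# Illustrative (quizzes submitted, conceptual item score) pairs.
data = [(0, 19), (0, 20), (2, 17), (3, 18), (6, 20), (7, 21), (11, 22), (13, 21)]

groups: dict[str, list[int]] = {}
for n, score in data:
    groups.setdefault(use_category(n), []).append(score)

# One-way analysis of variance across the four use groups.
F, p = f_oneway(*groups.values())
print(f"F = {F:.2f}, p = {p:.3f}")
```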
Table 4: Categories of Student Use of Optional Online Quizzes
As presented in Table 5, analysis of variance revealed a pattern of achievement associated with levels of student use of true-false online quizzes. Student achievement on factual and application in-class examination items was not significantly related to variation in use of optional online true-false quizzes. These two cognitive categories of multiple choice in-class examination items had the highest achievement averages (refer to Table 1); in-class examination items that students found less challenging were not related to online quiz use. Comprehension and conceptual in-class test items appeared challenging for students (means of 23.7/38 and 18.5/38, respectively) and were significantly related to use of true-false online quizzes. Completion of true-false quizzes may have facilitated the cognitive processing necessary to answer challenging examination questions; conversely, students with higher-level thinking skills may have gravitated toward the challenges of the true-false online quizzes. In the case of in-class comprehension test items, only high use was associated with increased student achievement; students who submitted more than 9 of the 14 true-false online quizzes had a mean of 37.4/38 on challenging in-class comprehension examination items (class mean of 23.7/38). However, both moderate and high use of true-false quizzes differentiated student achievement as measured by in-class conceptual examination items (means of 20.1/38 and 20.6/38, respectively). Perhaps even moderate use of online true-false quizzes is related to enhanced student cognitive competencies; conversely, cognitively competent students may have submitted more than 4 of the 14 optional true-false online quizzes in preparation for in-class examinations.
Table 5: Analysis of Variance: Online True-False Quiz Use and Academic Achievement
As presented in Table 6, a different pattern emerged from the analysis of variance for student use of optional short-answer online quizzes. On all measures of in-class achievement (i.e., factual, application, comprehension, and conceptual proctored examination items), students who submitted more than 9 of the 14 short-answer online quizzes outperformed students who made lesser use of the study tool. Online short-answer quizzes may have enhanced student learning or, alternatively, high-achieving college students may generally capitalize on available study tools. Use of optional online quizzes was not incrementally related to student achievement. As also suggested by the patterns of true-false quiz use (presented in Table 5), students in the no-use group outperformed students in the low-use group. For example, on the in-class conceptual examination items, students who did not submit a single optional short-answer quiz during the academic term achieved a mean of 19.4; students who submitted only a few short-answer online quizzes achieved a mean of 17.3 on the same 38 in-class multiple choice examination items. It may be that students who did not use the optional online quizzes engaged in other study strategies, while students who made limited use of the quizzes were characterized by generally disorganized study behaviour.
Table 6: Analysis of Variance: Online Short-Answer Quiz Use and Academic Achievement
Do optional online quizzes cause increased student achievement? Such a question can only be answered via true experimental research design (Johnson & Howell, 2005). Random assignment of students to one of two instructional conditions (i.e., quizzes and no quizzes) is not possible in the ecologically valid context of existing college classes. Thus, while the current investigation has limitations, the results from this study do provide direction for future research.
First, when provided with optional online quizzes, about two-thirds of college students accessed such support for learning. Approximately 10% of students submitted all or almost all of the available online quizzes. It appears that the majority of students were largely unmotivated to use the online quizzes. Future research may clarify the reasons that students fail to use online practice quizzes to prepare for in-class examinations. Several questions remain. Does the extent of online quiz use increase or decrease during the academic term? Do online quiz scores influence continued use? Do in-class examination results influence use of online quizzes?
Second, student use of optional online quizzes was associated with increased academic achievement, although it is not clear whether use caused achievement or achievement caused use. Future research may clarify the specific mechanisms that link academic achievement with use of online practice quizzes. Perhaps a reciprocal exchange occurs: effective learners capitalize on available study resources, which increases achievement, which in turn strengthens their commitment to such study tools, which further increases achievement, and so on.
Third, different patterns of achievement were associated with the two optional online quiz item formats. Student submission of true-false online quizzes was associated with achievement as measured by in-class examination items that required higher-level thinking skills (i.e., comprehension and conceptual test items) but not with in-class examination items that required concrete types of thinking (i.e., recall of fact and practical application test items). Student submission of short-answer online quizzes was associated with all cognitive categories of student achievement (i.e., factual, application, comprehension, and conceptual). While the textbook test item bank indicated cognitive domain for multiple choice items, no such categorization was provided for the true-false and short-answer items. Visual inspection of the online quiz items revealed that short-answer items primarily assessed factual knowledge while true-false items assessed the full range of cognitive domains (i.e., factual, application, comprehension, and conceptual). Subsequent investigation may more carefully align the cognitive demands of in-class examination items with the cognitive demands of optional online quiz items.
Fourth, the current investigation was constrained by the WebCT Quiz Tool and the optional nature of the online quizzes. Student use of an online quiz was operationalized as any mark recorded for that student in the WebCT Grades Tool (i.e., quiz scores are automatically recorded). When a student submitted a quiz but answered no item, or answered no item correctly, the Grades Tool recorded a score of zero for that student for that quiz. Such a metric provided a reliable index of access but not a valid measure of actual use. Future research may seek to measure student use in terms of quiz scores or the proportion of quiz items attempted, although such approaches are not without compromises. For example, in ongoing data collection (Johnson & Johnson, 2005), online quizzes are not optional (i.e., each quiz contributes 1% to the final course grade). While making the quizzes mandatory will likely yield a more accurate indication of actual use, the generalizability of such findings may still be limited.
Finally, a variety of study strategies are available to undergraduate students (Dembo, 2004); the current investigation focused only on automated optional quizzes in the WebCT environment. A deeper understanding of undergraduate students’ patterns of study behaviour is required, particularly the influence of various patterns of online study on academic achievement (Johnson & Johnson, 2005) and whether optional online quizzes serve students well. Reportedly, cognitive characteristics such as learning style mediate study strategy effectiveness (Dunn & Stevenson, 1997; Skogsberg & Clump, 2003). Online study tool effectiveness may also be mediated by individual difference variables. Indeed, recently collected data revealed that student learning style differentially affects achievement under two online study conditions: individual quizzes versus cooperative study groups (Johnson, under review).
The current investigation explored patterns of student use of optional online quizzes and determined that there is a relationship between such use and academic achievement. Questions remain as to what motivates students to use online quizzes, which question formats tend to best support student learning, and how the use of online practice quizzes fits into the range of effective study methods used by students throughout a course.
References
Bol, L., & Hacker, D. J. (2001). A comparison of the effects of practice tests and traditional review on performance and calibration. Journal of Experimental Education, 69, 133-151.
Brothen, T., & Wambach, C. (2001). Effective student use of computerized quizzes. Teaching of Psychology, 28, 292-294.
Byers, J. A. (1999). Interactive learning using expert system quizzes on the Internet. Educational Media International, 36, 191-194.
Chickering, A. W., & Gamson, Z. F. (1999). Development and adaptations of the seven principles for good practice in undergraduate education. New Directions for Teaching and Learning, 80, 75-81.
Dembo, M. H. (2004). Motivation and learning strategies for college success: A self-management approach. Mahwah, NJ: Lawrence Erlbaum.
Derouza, E., & Fleming, M. (2003). A comparison of in-class quizzes vs. online quizzes on student exam performance. Journal of Computing in Higher Education, 14, 121-134.
Dunn, R., & Stevenson, J. M. (1997). Teaching diverse college students to study within a learning-styles prescription. College Student Journal, 31, 333-339.
EdTech. (2005). Online quizzing. Effective use of online course tools. Available at http://www.edtech.vt.edu/edtech/id/ocs/quizzes.html
Fritz, K. M. (2003). Using Blackboard 5 to deliver both traditional and multimedia quizzes on-line for foreign language classes. ERIC Document Reproduction No. ED482584.
Grabe, M., & Sigler, E. (2002). Studying online: Evaluation of an online study environment. Computers and Education, 38, 375-383.
Haberyan, K. A. (2003). Do weekly quizzes improve student performance on General Biology Exams? The American Biology Teacher, 65, 110-114.
Hacker, D. J., Bol, L., Horgan, D., & Rakow, E. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92, 160-170.
Herring, W. (1999). Use of practice tests in the prediction of GED test scores. Journal of Correctional Education, 50, 6-8.
Hutchins, H. M. (2003). Instructional immediacy and the seven principles: Strategies for facilitating online courses. Online Journal of Distance Learning Administration, 6. Retrieved March 12, 2005, from http://www.westga.edu/~distance/ojdla/fall63/hutchins63.html
Itoh, R., & Hannon, C. (2002). The effect of online quizzes on learning Japanese. CALICO Journal, 19, 551-561.
Jensen, M., Johnson, D. W., & Johnson, R. T. (2002). Impact of positive interdependence during electronic quizzes on discourse and achievement. Journal of Educational Research, 95, 161-167.
Jensen, M., Moore, R., & Hatch, J. (2002). Electronic cooperative quizzes. American Biology Teacher, 64, 169-174.
Johnson, G. M. (under review). Learning style under two online study conditions: An ABAB analysis. British Journal of Educational Psychology.
Johnson, G. M., & Howell, A. J. (2005). The impact of Internet learning technology: Experimental methods of determination. In B. L. Mann (Ed.), Selected styles in web-based educational research (pp. 279-298). Hershey, PA: Idea Group Publishing.
Johnson, G., & Johnson, J. (2005). Online study groups: Comparison of two strategies. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2005 (pp. 2025-2030). Norfolk, VA: AACE.
Kashy, E. K., Thoennessen, M., Albertelli, G., & Tsai, Y. (2000). Implementing a large-scale on-campus ALN: Faculty perspective. Journal of Asynchronous Learning Networks, 4. Retrieved March 12, 2005, from http://www.aln.org/publications/jaln/v4n3/index.asp
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254-284.
Maki, R. H. (1998). Test predictions over text material. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 117-144). Mahwah, NJ: Erlbaum.
McGraw-Hill Higher Education. (2005). Educational Psychology Online Learning Center. Available at http://highered.mcgraw-hill.com/sites/0070909695/student_view0/index.html
Perlman, C. L. (2003). Practice tests and study guides: Do they help? Are they ethical? What is ethical test preparation practice? ERIC Document Reproduction No. ED480062.
Renaud, R. (2003). Test item file for Educational Psychology (2nd Canadian ed.). Toronto, ON: Pearson Education Canada.
Rotermann, M. (2001). Wired young Canadians. Canadian Social Trends, 63, 4-8.
Ruscio, J. (2001). Administering quizzes at random to increase students’ reading. Teaching of Psychology, 28, 204-206.
Skogsberg, K., & Clump, M. (2003). Do psychology and biology majors differ in their study processes and learning styles? College Student Journal, 37, 27-33.
Smith, J. L., Brooks, P. J., Moore, A. B., Ozburn, W., Marquess, J., & Horner, E. (2000). Course management software and other technologies to support collaborative learning in nontraditional Pharm. D. courses. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 2. Available at http://imej.wfu.edu/articles/2000/1/05/index.asp
Sporer, R. (2001). The no-fault quiz. College Teaching, 49, 61.
Statistics Canada. (2004). Household Internet Use Survey. Available at http://www.statcan.ca/Daily/English/040708/d040708a.htm
Wilder, D. A., Flood, W. A., & Stromsnes, W. (2001). The use of random extra credit quizzes to increase student attendance. Journal of Instructional Psychology, 28, 117-120.