Abstract
As online courses become more common, practitioners are in need of clear guidance on how to translate best educational practices into web-based instruction. Moreover, student engagement is a pressing concern in online courses, which often have high levels of dropout. Our goals in this work were to experimentally study routine instructional design choices and to measure the effects of these choices on students’ subjective experiences (engagement, mind wandering, and interest) in addition to objective learning outcomes. Using randomized controlled trials, we studied the effect of varying instructional activities (namely, assessment and a step-through interactive) on participants’ learning and subjective experiences in a lesson drawn from an online immunology course. Participants were recruited from Amazon Mechanical Turk. Results showed that participants were more likely to drop out when they were in conditions that included assessment. Moreover, assessment with minimal feedback (correct answers only) led to the lowest subjective ratings of any experimental condition. Some of the negative effects of assessment were mitigated by the addition of assessment explanations or a summary interactive. We found no differences between the experimental conditions in learning outcomes, but we did find differences between groups in the accuracy of score predictions. Finally, prior knowledge and self-rated confusion were predictors of post-test scores. Using student behavior data from the same online immunology course, we corroborated the importance of assessment explanations. Our results have a clear implication for course developers: the addition of explanations to assessment questions is a simple way to improve online courses.
Keywords: assessment, feedback, affect, confusion, online course design