Abstract
Students’ evaluations of teaching effectiveness are used for academic and formative purposes, but they are also sometimes invoked, in a summative way, in faculty career decisions. The present study pursues two main objectives. The first is to determine the psychometric properties of the instrument in use at the Université du Québec à Rimouski. With the advent of information and communication technologies, the conditions in which the evaluations take place have shifted from paper to online processes. The second objective is therefore to verify the impact of these online vs. paper processes on the psychometric properties of the instrument. The data originate from the evaluations gathered during the 2007-2008 (20,245 evaluations) and 2010-2011 (16,432 evaluations) academic years. The results attest that a hierarchical model describes the data better than alternative models, and that online vs. paper processes influence response rates and, in some cases, the level of satisfaction with teaching, but have no impact on the reliability or the dimensionality of the evaluations. The implications of these results are discussed.
Keywords:
- students’ evaluations of teaching effectiveness,
- online vs paper processes,
- validity
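The hierarchical model the abstract refers to can be sketched, in generic form, as a second-order confirmatory factor model; the notation below is a standard illustration of that model class, not the article’s exact specification:

```latex
% Second-order (hierarchical) CFA sketch:
% item responses x_{ij} load on specific teaching dimensions \eta_j,
% which in turn load on a general teaching-effectiveness factor \xi.
\begin{aligned}
x_{ij} &= \lambda_{ij}\,\eta_j + \varepsilon_{ij}, \\
\eta_j &= \gamma_j\,\xi + \zeta_j,
\end{aligned}
```

Under this structure, correlations among the specific dimensions are explained by their common loading on the general factor, which is what distinguishes the hierarchical model from the one-factor and correlated-factors alternatives it is compared against.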