Abstracts
Résumé
This article documents the process of simplifying the procedure for student evaluation of teaching. The simplification rests on the design of a short instrument that allows valid and reliable judgments to be made about teaching quality. A comparison of the psychometric qualities of the original and short instruments is then carried out. In conclusion, some modifications to the short version are proposed, but this version nevertheless represents an adequate substitute for the original version.
Keywords:
- evaluation,
- teaching,
- students,
- validity,
- sequential behavior,
- reliability,
- short questionnaire
Abstract
This paper documents the process of simplifying the student course evaluation procedure. The simplification relies on the design of a short questionnaire that supports valid and reliable judgments about the quality of a course. To this end, the psychometric qualities of the original and short versions are compared. In conclusion, some modifications of the short version are proposed, but it nevertheless represents an adequate substitute for the original version.
Keywords:
- evaluation,
- teaching,
- students,
- validity,
- sequential behavior,
- reliability,
- short questionnaire
Resumo
This article documents the process of simplifying the procedure for student evaluation of teaching. The simplification is based on the design of a short instrument that allows valid and reliable judgments to be made about teaching quality. A comparison of the psychometric qualities of the original and short instruments is thus carried out. Finally, some changes to the short version are proposed, but this version nonetheless represents an adequate substitute for the original version.
Keywords:
- evaluation,
- teaching,
- students,
- validity,
- sequential behavior,
- reliability,
- short questionnaire