Abstracts
Abstract
Despite the importance of conceptual understanding in science learning, few tests exist that measure students' level of understanding. The objective of this article is therefore to develop a test, valid for diagnostic purposes, that measures the level of understanding of kinematics concepts in secondary school physics. Analysis of the responses with an item response model shows that the levels of understanding defined by the test are distinct from one another and form a hierarchical, cumulative scale.
Keywords:
- Comprehension assessment,
- kinematics,
- item response model,
- superitem,
- SOLO taxonomy
Abstract
Despite the importance of understanding in science learning, there is a shortage of tests that measure high school students' level of understanding in physics. The objective of our research is therefore to develop a test, valid as a diagnostic tool, that measures students' level of understanding of kinematics concepts in high school physics. Analysis of the answers with an item response model shows that the levels of understanding defined by the test are well separated and constitute a hierarchical, cumulative scale.
Keywords:
- Comprehension assessment,
- kinematics,
- item response model,
- superitem,
- SOLO taxonomy,
- formative or diagnostic assessment
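The abstract does not specify which item response model was used. As a minimal sketch only, the following shows the partial credit model, a Rasch-family model commonly applied to superitems whose ordered response categories (such as SOLO levels) form a cumulative scale; the function name and the threshold values are illustrative assumptions, not taken from the article.

```python
import math

def pcm_category_probs(theta, thresholds):
    """Partial credit model: probability of each ordered score
    category 0..m for one superitem.

    theta      -- person ability in logits
    thresholds -- step difficulties delta_1..delta_m in logits
    """
    # The log-weight of category k is the cumulative sum of
    # (theta - delta_j) for j = 1..k; category 0 has log-weight 0.
    log_weights = [0.0]
    for delta in thresholds:
        log_weights.append(log_weights[-1] + (theta - delta))
    weights = [math.exp(lw) for lw in log_weights]
    total = sum(weights)
    return [w / total for w in weights]
```

Under this model, a low-ability student concentrates probability on the lowest SOLO level and a high-ability student on the highest, which is the pattern of a hierarchical, cumulative scale:

```python
low = pcm_category_probs(-2.0, [-1.0, 0.0, 1.0])
high = pcm_category_probs(2.0, [-1.0, 0.0, 1.0])
```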
Abstract
Despite the importance of conceptual understanding in science learning, few tests exist that make it possible to measure students' level of understanding. The objective of this article is thus to develop a test, valid for diagnostic purposes, that measures the level of understanding of kinematics concepts in secondary school physics. Analysis of the responses with an item response model shows that the levels of understanding defined by the test are distinct from one another and form a hierarchical, cumulative scale.
Keywords:
- Comprehension assessment,
- kinematics,
- item response model,
- superitem,
- SOLO taxonomy,
- formative or diagnostic assessment