Abstracts
Résumé
Estimating linguistic complexity is an important aspect of educational measurement and assessment that can be used, for example, to control unwanted variance attributable to language or to provide students with texts that are conducive to learning. Natural language processing techniques make it possible to extract various features that reflect the complexity of vocabulary and sentence structure. In this article, we present a new tool called ALSI (Analyseur Lexico-Syntaxique Intégré). We summarize how the tool works and present the types of features it can extract. We then apply ALSI to 600 texts used in Quebec elementary and secondary schools and analyze the correlations between the features and the grade level associated with each text. The results show the potential of ALSI for modeling the complexity of French texts.
Keywords:
- corpus analysis,
- natural language processing,
- readability,
- psycholinguistics,
- French
Abstract
Estimating linguistic complexity is an important aspect of educational measurement and assessment that can be used, for instance, to control unwanted variance due to language or to provide students with texts that are conducive to learning. Natural language processing techniques can be used to extract various linguistic features that reflect the complexity of vocabulary and sentence structure. In this paper, we present a new tool called ALSI (Analyseur Lexico-Syntaxique Intégré, or Integrated Lexico-Syntactic Analyzer), which we developed for research and educational applications. We summarize how the tool works and present the types of features it can extract. We then apply ALSI to 600 texts used in Quebec elementary and secondary schools and analyze the correlations between the features and the grade level associated with each text. The results show the potential of ALSI for modeling the complexity of French texts.
Keywords:
- corpus analysis,
- natural language processing,
- readability,
- psycholinguistics,
- French
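To make the workflow summarized in the abstract more concrete, the sketch below illustrates one way such an analysis can be set up: extract a few simple lexical and syntactic features from French texts with an off-the-shelf NLP pipeline, then correlate each feature with the grade level assigned to the text. This is a minimal illustration only, not the ALSI implementation described in the article; it assumes the spaCy library with its fr_core_news_sm French model and SciPy (neither of which is mentioned in the abstract), and it uses a tiny placeholder corpus rather than the 600-text Quebec corpus analyzed by the authors.

```python
# Hypothetical sketch only: this is NOT the authors' ALSI tool. It assumes
# spaCy (with the fr_core_news_sm French model) and SciPy are installed:
#   pip install spacy scipy && python -m spacy download fr_core_news_sm
import spacy
from scipy.stats import spearmanr

nlp = spacy.load("fr_core_news_sm")

def extract_features(text: str) -> dict:
    """Compute a few illustrative lexical/syntactic complexity features."""
    doc = nlp(text)
    words = [t for t in doc if t.is_alpha]
    sentences = list(doc.sents)
    return {
        # average number of words per sentence (sentence-structure proxy)
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        # type-token ratio (vocabulary diversity proxy)
        "type_token_ratio": len({t.lower_ for t in words}) / max(len(words), 1),
        # average word length in characters (lexical complexity proxy)
        "mean_word_length": sum(len(t.text) for t in words) / max(len(words), 1),
    }

# Placeholder (text, grade level) pairs -- not the corpus used in the article.
corpus = [
    ("Le chat dort sur le tapis. Il aime le soleil.", 1),
    ("Les élèves observent des insectes et notent leurs découvertes dans un carnet.", 4),
    ("La photosynthèse transforme l'énergie lumineuse en énergie chimique emmagasinée.", 8),
]

features = [extract_features(text) for text, _ in corpus]
grades = [grade for _, grade in corpus]

# Spearman rank correlation between each feature and grade level.
for name in features[0]:
    rho, p_value = spearmanr([f[name] for f in features], grades)
    print(f"{name}: rho = {rho:.2f} (p = {p_value:.3f})")
```

Spearman's rank correlation is used in the sketch because grade level is an ordinal variable; whether the article relies on this or another correlation measure, and which specific features ALSI extracts, is not specified in the abstract.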
Resumo
Estimating linguistic complexity is an important aspect of educational measurement and assessment that can be used, for example, to control unwanted variance due to language or to provide students with texts that are conducive to learning. Natural language processing techniques make it possible to extract various features that reflect the complexity of vocabulary and sentence structure. In this article, we present a new tool called ALSI (Integrated Lexico-Syntactic Analyzer). We summarize how the tool works and present the types of features it can extract. We then apply ALSI to 600 texts used in elementary and secondary schools in Quebec and analyze the correlations between the features and the grade level associated with each text. The results show the potential of ALSI for modeling the complexity of French texts.
Keywords:
- text features,
- corpus analysis,
- natural language processing,
- readability,
- French