Plenary

GUEST SPEAKERS

Anna Cardinaletti (Università Ca’ Foscari)

Friday 25 October - 14h-15h15 - Amphi G

On accessible language testing for students with disabilities

In this talk, the results of a project funded by the Italian Ministry of Education, University and Research (MIUR) are presented. Italian legislation establishes national guidelines that set out accommodations and exemptions for students with medical certification in order to guarantee their access to education. As a result, increasing numbers of disabled students are continuing their studies at university level.

Although the increasing numbers of disabled students enrolling in tertiary-level education can only be seen in a positive light, universities are still insufficiently prepared to deal with the many issues that this trend raises.

For enrollment, Italian universities require certification of general English skills at CEFR level B1. Students must also often demonstrate their skills in written Italian as a mandatory entry requirement.

The main focus of the project is to provide students with sensory, language, and learning disabilities with equal opportunities in the high-stakes language testing required for university entrance, while maintaining those features essential to ethical testing, test validity, and fairness.

Little attention has been given to the consequences of dyslexia for the morpho-syntactic and textual dimensions of language and for the metalinguistic competences required in advanced study. Without sufficient theoretical awareness, the accommodations and exemptions used in testing at the university level risk becoming ineffective. In many cases, additional test time, speech synthesizers, and digital dictionaries have failed to produce the desired results.

The difficulties faced by deaf students concern not only the oral dimension of language learning, but also the written dimension. Nevertheless, despite difficulties in specific spheres of language (spelling, functional morpho-syntactic elements, specialist lexis), the language level reached can be sufficient for university study. It is therefore important to guarantee equal opportunities to deaf students who possess adequate cognitive abilities, despite the difficulties they encounter with language.

The project has designed a series of studies to examine different aspects of computer-based tests in the native language (Italian) and in foreign languages (English in particular). The results have allowed us to develop guidelines for valid tests of Italian and English that are accessible to disabled students, and to design tailored courses in preparation for the university entrance tests in Italian and English, as well as for other language exams in all degree courses.

Anna Cardinaletti is Full Professor of Glottology and Linguistics in the Department of Linguistics and Comparative Cultural Studies at Università Ca' Foscari Venezia, where she teaches theoretical and applied linguistics, clinical linguistics, and Italian linguistics. Her work focuses on theoretical linguistics and comparative syntax and on their applications to typical and atypical acquisition of Italian as an L1, to language teaching, and to the understanding of language and communication disabilities (in particular, dyslexia and deafness). She has coordinated numerous national and international research projects on these topics, including a MIUR-funded project on accessible language testing, and is a founding partner of the spin-off VEASYT. She is the author of over 150 scientific publications. Her most recent works include two volumes published by FrancoAngeli: La lingua dei segni nelle disabilità comunicative (with C. Branchini, 2016) and Test linguistici accessibili per studenti sordi e con DSA. Pari opportunità per l'accesso all'Università (2018).

 

Tim McNamara (University of Melbourne)

Thursday 24 October - 14h15-15h30 - Amphi G

Fairness, Justice, and Language Assessment

Language assessments may be used to make consequential decisions about individuals, including admission to higher education, employment, residency and citizenship. These uses of language tests raise issues of validity: how reasonable and defensible are the decisions about individuals made on the basis of such tests? We can distinguish two aspects of this question. The first concerns the capacity of a test to make meaningful distinctions among individuals, which depends on its technical quality. Here we can talk about the narrower question of the fairness of the test. The second is the broader question of the defensibility of the policy whereby a person is required to be tested at all, that is, the use of the test. Here, we can talk about the question of the justice of the test. This distinction is discussed both in relation to theories of test validity and in relation to currently used assessment frameworks such as the Common European Framework of Reference (CEFR) and international tests of English for Academic Purposes such as IELTS.

Tim McNamara is Redmond Barry Distinguished Professor in the School of Languages and Linguistics at The University of Melbourne, where he was involved in the founding of the graduate program in applied linguistics and the Language Testing Research Centre. His language testing research has focused on performance assessment, theories of validity, the use of Rasch models, and the social and political meaning of language tests. He developed the Occupational English Test, a specific-purpose test for health professionals, and was part of the research teams involved in the development of both IELTS (British Council/University of Cambridge/IDP) and TOEFL-iBT (Educational Testing Service). He is the author of Measuring Second Language Performance (1996, Longman), Language Testing (2000, OUP), Language Testing: The Social Dimension (2006, Blackwell, with Carsten Roever), Fairness and Justice in Language Assessment (2019, OUP, with Ute Knoch and Jason Fan) and Language and Subjectivity (2019, CUP). In 2015 he was awarded the Distinguished Achievement Award of the International Language Testing Association, and he was President of the American Association for Applied Linguistics (AAAL) in 2017-2018. Tim is a Fellow of the UK Academy of Social Sciences and a Fellow of the Australian Academy of the Humanities.

 

James Purpura (Columbia University)

Friday 25 October - 9h-10h15 - Amphi G


Investigating the Effects of Assistance on Performance in a Scenario-Based Writing Assessment

The ability to perform well in L2 academic or workplace contexts is predicated upon the ability to access and utilize a range of topical and communicative resources to perform simple and complex tasks in some real-life domain of language use. Performance in these settings, however, also depends upon an individual's ability to process new topical and linguistic content, often with varying types of assistance, and to flexibly integrate these new understandings into products that reflect the thinking of more than one person. Thus, success in real-world contexts often depends not only on the static display of topical and linguistic competencies, but also on their dynamic development.

An example of such a situation might be when ecologists have to work collaboratively to reason through a problem related to the impact on an ecosystem of the potential loss of one species in the food chain due to new construction. A successful resolution of this problem is likely to involve the acquisition and integration of topic-specific content and the associated language. Success in this situation is also likely to be moderated by factors such as problem comprehension, peer instruction, reasoning skills, cognitive load, feedback and assistance processes, collaboration strategies, interactional strategies, and socio-affective strategies.

Thus, given the number of factors involved in this situation, what topical and linguistic outcomes would we want to measure, and how could we account for the moderating effects of these factors (e.g., lack of background knowledge) on performance, if we were to simulate this situation in a language test? And how could we structure the assessment event so that examinees must engage in the kinds of complex processes they might encounter in a real-life problem-solving task of the same sort? One way to do this is through a technique called "scenario-based assessment," in which a carefully sequenced set of tasks is presented to students on the path to problem resolution. One way to account for the multitude of factors involved is to take a learning-oriented approach to the assessment design.

Thus, the purpose of the current paper is to investigate the effects of different types of assistance on the display and development of topical and linguistic knowledge by examinees engaged in a scenario requiring them to solve a science problem. In this talk I will first define scenario-based assessment and describe how scenarios, conceptualized as a purposeful set of carefully sequenced, thematically related tasks designed to simulate real-life performance, can provide a concrete mechanism for measuring an expanded range of theoretical constructs. Then, I will briefly describe a learning-oriented approach to assessment and show how this approach was used to design the scenario tasks. Finally, I will describe the study and report on some of the quantitative and qualitative results related to how examinees not only displayed, but also developed, topical and linguistic competencies throughout the scenario as a result of assistance. In this section I will briefly describe how the writing tasks were scored using a content-responsible rubric.

James E. Purpura is Professor of Linguistics and Education in the Applied Linguistics and TESOL Program at Teachers College, Columbia University, where he teaches L2 assessment and L2 research methods. Besides publications in journals and edited volumes, Jim's books include: Strategy use and language test performance: A structural equation modeling approach (CUP), Assessing grammar (CUP), and The writings of L. F. Bachman: Assuring that what we count counts in language assessment (with A. J. Kunnan) (Routledge). Jim is currently working on Learning-oriented assessment in language classrooms: Using assessments to gauge and promote language learning (with C. E. Turner) (Routledge). Jim is co-editor of Language Assessment Quarterly and series co-editor of New Perspectives on Language Assessment (Routledge) and Language Assessment at ETS: Innovation and Validation (Routledge). He was President of the International Language Testing Association (2007-2008) and is an expert consultant for EALTA. Having served on the TOEFL Committee of Examiners at ETS, Jim is on the Defense Language Testing Advisory Panel in Washington, D.C., and is a member of the Committee on Foreign Language Assessment for the U.S. Foreign Service Institute in the National Academies of Sciences, Engineering, and Medicine's Division of Behavioral and Social Sciences and Education. In 2017, Jim was a Fulbright Scholar at the University for Foreigners of Siena, Italy, where he is currently involved in two scenario-based assessment projects.

 

Round table - Contrasting perspectives on assessment: Jean-Paul Narcy-Combes (Université Sorbonne Nouvelle) - Sébastien Georges (France Education International)
