Second Language Assessment


Using ICTs to improve second-language assessment

Patricia Rosen, Memorial University of Newfoundland

Problem

The development of authentic assessment and evaluation situations for second language learners continues to be an issue (Cummins & Davesne, 2009). Aydin (2006) reported on the scarcity of data available on assessment and evaluation in L2, and Ketabi and Ketabi (2014) found no consensus on the distinction between types of assessment in second language learning. East and King (2012) pointed out that the ‘washback’ effect is a challenge to authentic assessment, as teachers focus on what is required for a final test rather than on the individual needs of learners.

Another challenge has been to create valid evaluations that reflect real-life situations without seeming contrived (Laurier, 2004). Luk (2010) found that university oral proficiency interviews were contrived, resulting in “…features of institutionalized and ritualized talk rather than those of ordinary conversation” (p. 47). Birch and Volkov (2007) reported difficulties in reliably assessing classroom conversation, as weaker learners were reluctant to speak and stronger students dominated conversations.

Ketabi and Ketabi (2014) observed that authentic assessment in large classrooms is particularly difficult, so traditional testing, focused on ‘…gathering scores…’ (p. 438), was used most often. They noted that this could create stressful situations that hindered student performance. East and King (2012) also identified difficulties such as anxiety and helplessness related to ‘high-stakes testing’, specifically listening tests with ‘once only at normal speed’ (p. 209) input for test-takers. Wagner (2010) argued that widely used listening tests relying on the auditory channel alone are not authentic because they do not allow test-takers to interpret visual along with linguistic cues. Jones (2004) likewise demonstrated a need to incorporate both auditory and visual channels when designing second language vocabulary evaluations, which have traditionally been based on either recognition or recall activities.

Role of ICTs

Laurier (2004) highlighted ways in which ICTs were a natural choice for assessment and evaluation. He recommended using ICTs for evaluation management, for facilitating authentic evaluation situations, and for increasing student control, specifically through e-portfolios. Cummins and Davesne (2009) also suggested that e-portfolios were a way to integrate authentic assessment practices into L2 learning environments. They pointed out that e-portfolios allowed for the inclusion of a variety of artefacts as well as links between goals, progress, and production samples. Luk (2010) concluded that digital technologies could facilitate the collection of data for oral assessment and suggested ‘oral language portfolios’ as a way to organize these data.
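
To make the idea of linked goals, progress, and production samples concrete, the following sketch shows one hypothetical way an e-portfolio entry might be modelled in Python; the structure and field names are illustrative assumptions, not drawn from Cummins and Davesne (2009) or Luk (2010).

    # Hypothetical data model for an e-portfolio entry that links a learning
    # goal to progress notes and production samples (essays, recordings, etc.).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PortfolioEntry:
        goal: str                                             # e.g. a can-do statement
        progress_notes: List[str] = field(default_factory=list)
        artefacts: List[str] = field(default_factory=list)    # file paths or URLs

    entry = PortfolioEntry(goal="Narrate a past event in the target language")
    entry.progress_notes.append("Week 3: used past tense correctly in most clauses")
    entry.artefacts.append("recordings/oral_sample_week3.mp3")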

There are a number of ways that ICTs can facilitate authentic evaluation situations (Laurier, 2004). The assessment of student conversations was facilitated through video-recording in Luk's (2010) study of school-based oral assessment through group interaction. Jones (2004) reported that using ICT to create multimedia learning and assessment activities blending pictures, text, and sound led to greater vocabulary retention amongst university students. Learners could choose how to access the vocabulary in the assessments and were most successful when the learning and assessment activities were similar in design. East and King (2012) found that student comprehension increased when technology was used to slow down the audio of listening comprehension tests. They recommended that teachers use this technique to scaffold learning when preparing students for high-stakes listening tests.
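
Slowing recorded speech without altering its pitch is a standard operation in audio libraries. The sketch below is a minimal illustration, assuming the Python packages librosa and soundfile are available; the file names and the 0.75 rate are hypothetical and are not taken from East and King's study.

    # Minimal sketch: slow a listening passage to 75% of normal speed
    # without changing the pitch. File names are hypothetical.
    import librosa
    import soundfile as sf

    audio, sample_rate = librosa.load("listening_passage.wav", sr=None)  # keep native rate
    slowed = librosa.effects.time_stretch(audio, rate=0.75)              # rate < 1 slows
    sf.write("listening_passage_slow.wav", slowed, sample_rate)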

Odo (2012) suggested expanding the idea of language ability to include using technology for communication. He found that learners instinctively used the affordances of technology, such as highlighting text and adjusting font size, to increase their understanding of reading assessments. Aydin (2006) found that using computers for writing tests resulted in higher test scores and higher inter-rater reliability in the assessment of university ESL students. Li, Link, Ma, Yang, and Hegelheimer (2014) found that using ICTs for automated writing evaluation provided ongoing feedback on student performance and increased students' revision of their writing. In his study on the use of video for the assessment of listening comprehension among college students, Wagner (2007) found that video's affordances could allow for a more authentic evaluation experience, as students were able to access non-verbal as well as verbal communication cues.
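
Automated writing evaluation systems rely on sophisticated scoring models. The sketch below is only a rough, hypothetical illustration of the kind of ongoing, surface-level feedback such a tool can return on a draft; it is not the system examined by Li et al. (2014).

    # Hypothetical illustration of automated writing feedback: flags overly
    # long sentences and repetitive vocabulary in a draft. Real AWE systems
    # are far more sophisticated than this surface check.
    import re

    def writing_feedback(draft, max_sentence_words=25):
        feedback = []
        sentences = re.split(r"(?<=[.!?])\s+", draft.strip())
        for i, sentence in enumerate(sentences, start=1):
            words = sentence.split()
            if len(words) > max_sentence_words:
                feedback.append(f"Sentence {i} has {len(words)} words; consider splitting it.")
        tokens = re.findall(r"[a-z']+", draft.lower())
        if tokens and len(set(tokens)) / len(tokens) < 0.4:   # low type-token ratio
            feedback.append("Vocabulary is repetitive; try varying word choice.")
        return feedback

    for comment in writing_feedback("This is a short draft. " * 5):
        print(comment)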

Birch and Volkov (2007) found that assessing the online discussion commentary of L2 university students gave learners equal opportunities to participate. They reported higher learner engagement than in face-to-face discussions, and found it easier to identify at-risk learners and provide them with feedback to improve their performance. Vincent-Durroux, Poussard, Lavaur, and Aparicio (2011) supported the use of online language programming and assessment to improve specific skills in university ESL learners.

Obstacles

Works cited