This research contributes to our understanding of how refugee-background adult L2 learners with emerging literacy make meaning from multimodal assessment texts. It focuses on assessments used in an English as a Second Language (ESL)/English literacy program for refugee-background adults. Meaning-making involves both perception and production; it is inherently dialogic and grounded in social semiotics (Kress, 1994). While much research has examined the psycholinguistic aspects of adult L2 learners' literacy development (e.g., Kurvers, 2002; Young-Scholten & Naeb, 2010), many questions remain about the social semiotics of literacy. Little research has examined social semiotics and the visual/multimodal literacy of this population (e.g., Bruski, 2012).
Building on this prior scholarship and a pilot study, this research investigates meaning-making in language and literacy assessments from a social semiotic perspective. It explores how refugee-background adult L2 learners with emerging literacy construct meaning from multimodal elements (e.g., clipart images, photographs, lines, boxes, typed words) and test genre elements (e.g., instructions, multiple-choice questions, fill-in-the-blank questions) in assessment texts. In other words, the study examines the extent to which the intended meaning of assessment prompts aligns with the meaning perceived by participants. It investigates: 1) how this population articulates their understanding of the various multimodal components and test genre elements used in the design of the assessment texts, and 2) which strategies they rely on to make meaning.
Data come from two sources: textual analysis of a set of low-stakes, in-house assessments used in a state-funded, community-based ESL program (one that includes attention to English literacy) for adults from refugee backgrounds, along with two experimental assessments created through iterative design as part of the research; and semi-structured interviews with 14 participants enrolled in a literacy-level class. Data analysis utilized a critical multimodal social semiotic approach, informed by systemic functional linguistics (Kress, 2010; Kress & van Leeuwen, 2006; Pennycook, 2001). Data were coded according to metafunction and theme, with results organized by test genre element and multimodal component.
The results exposed tensions between participants' responses to textual and visual prompts and the expectations of test designers. The findings reveal that textual composition and assessment practices may be inadvertently biased against individuals with diverging literacy profiles. Drawing on a non-WEIRD sample, this research yields implications for test design and evaluation frameworks.