eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Reporting Fine-grained Feedback from a Summative Language Proficiency Assessment Using Diagnostic Classification Modeling (DCM): A Feasibility Study

Abstract

Summative English language proficiency (ELP) assessments, administered annually at the end of each school year, assist states in meeting federal requirements related to EL students. These assessments are increasingly expected to provide educators and other local stakeholders with information about students’ language proficiency for informing instruction. However, this application raises several key questions. As large-scale instruments, do the summative assessments provide information that is both meaningful and useful to local educators and school administrators? Through what process are the summative assessments expected to “work together” with other assessments, including classroom assessments? What evidence demonstrates how capable the current summative assessments are of fulfilling this role? To investigate these questions, this study explores a possible approach to improving the feedback provided by the reading subsection of an ELP summative assessment by retrofitting a diagnostic classification model (DCM). DCMs are a class of scoring model used to understand and report test data specifically in ways that stakeholders can interpret and use to make decisions about student instructional needs, namely classification-based judgments about what abilities and knowledge students do and do not have. The findings suggest that EL educators struggle to see the current score reports provided by ELP summative assessments as interpretable or instructionally useful, and that many are receptive to alternative approaches to getting feedback about their students, particularly feedback in the form of fine-grained, skill-based information about what students’ strengths and weaknesses are.
In addition, creatively applied DCMs could be used to generate finer-grained feedback about students’ reading abilities than is currently provided, and that feedback is impressively reliable given that it was derived from a single test administration. However, the findings also make clear that retrofitting DCMs is not an uncomplicated procedure. This study raises a number of cautionary flags regarding this class of models and these particular assessments that require further attention.
