
‘Caring’ Assessments: An Approach to Support Personalized Learning

Jesse R. Sparks Research Scientist at ETS
Blair Lehman Research Scientist at ETS
Diego Zapata-Rivera Distinguished Presidential Appointee and Senior Director of Research at ETS

Students embark on learning experiences with wide variation in their knowledge, skills, opportunities to learn, and lived experiences. Good teachers acknowledge and celebrate this diversity — they know that the learning process is not one size fits all and strive for individualized, personalized instruction that meets students where they are and helps them move forward. While assessments often occur at the end point of the learning process, typical standardized assessments are not sensitive to this wide range of individual variation, nor to the contexts in which learning takes place. Just as for learning, this one-size-fits-all approach has clear limitations for assessment.

What if there were a more personalized digital assessment that took such contextual and student-level differences into account and posed an appropriate level of challenge, yielding tasks that are both more engaging to students and valid to support other uses of the data they provide (e.g., to inform instruction, to provide feedback, to offer just-in-time hints and so on)?

This is the vision we have for “caring” assessments — assessments that consider aspects of the student not taken into account with current standardized assessments. These aspects include knowledge, skills and other relevant cognitive, metacognitive and social-emotional characteristics (sometimes referred to as noncognitive attributes), and aspects of the learning context, to create assessment environments that offer appropriate conditions for students to demonstrate what they know and can do.


What are “caring” assessments?

“Caring” assessments would offer a tailored assessment experience, with different task configurations that can be assigned to students based on what information is available about them ahead of time, for example, prior knowledge. Within a formative context, personalized, “caring” assessments could also provide just-in-time supports to help students understand and access the task, to re-engage students who may experience disengagement, and to offer response formats that enable students to best demonstrate what they know and can do. In this approach, aspects of the assessment context are carefully considered so that the assessment itself can result in a positive, safe and motivating learning experience where students can not only demonstrate what they know, but also prepare for future learning in related domains.
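To make the idea of assigning task configurations from prior information concrete, here is a minimal sketch in Python. The profile fields, thresholds, and configuration names are illustrative assumptions for this post, not part of any actual ETS system.

```python
# Hypothetical sketch: choosing an initial task configuration from
# information available about the student ahead of time.
# All fields and thresholds below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class StudentProfile:
    prior_knowledge: float        # 0.0-1.0, from a valid, reliable pre-measure
    reading_support_needed: bool  # e.g., flagged need for a glossary


def select_configuration(profile: StudentProfile) -> dict:
    """Pick a task configuration before the assessment begins."""
    if profile.prior_knowledge >= 0.7:
        difficulty = "challenge"
    elif profile.prior_knowledge >= 0.4:
        difficulty = "core"
    else:
        difficulty = "scaffolded"
    return {
        "difficulty": difficulty,
        "glossary_enabled": profile.reading_support_needed,
    }


print(select_configuration(StudentProfile(0.8, False)))
```

In practice the mapping from profile to configuration would be grounded in validated measures rather than fixed cut points, but the shape of the decision is the same: known student information in, tailored task configuration out.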

Score reports and feedback from such “caring” assessments can also provide enriched information about students in order to offer students, teachers, parents and guardians a nuanced picture of students’ strengths and opportunities for growth. These reports would be contextualized in terms of both student characteristics and important changes to the assessment format that would affect interpretation and use of the assessment results. Feedback can be tailored based on what the assessment knows about the student, which could increase the likelihood that students, teachers and parents or guardians interpret the feedback as highlighting opportunities for growth, making it more likely that they would act on that feedback.

How could “caring” assessments support personalized learning?

“Caring” assessments could adapt dynamically to different student characteristics that go beyond the typical demographic information usually collected and reported by assessments — including contextual knowledge, motivation, self-efficacy and emotions.

Implementing these kinds of adaptations requires a strong understanding of relevant student characteristics — based on valid, reliable measures — in order to “tune” the assessment up front, as well as the ability to track student behavior in real time during the assessment in order to make dynamic moment-by-moment adjustments to the tasks. The assessment system must be able to detect relevant evidence and use this information to select and deliver the intended adaptation.
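The detect-then-adapt loop described above can be sketched as follows. This is a toy illustration under stated assumptions: the event format, the rapid-guessing threshold, and the adaptation names are all hypothetical, standing in for the validated behavioral detectors such a system would actually require.

```python
# Hypothetical sketch of the moment-by-moment loop: detect evidence of
# disengagement from response behavior, then select an adaptation.
# Event fields, thresholds, and adaptation names are illustrative assumptions.

def detect_disengagement(events: list[dict]) -> bool:
    """Flag rapid guessing: several responses faster than a time threshold."""
    rapid = [e for e in events if e["response_time_s"] < 2.0]
    return len(rapid) >= 3


def select_adaptation(disengaged: bool, recent_errors: int) -> str:
    """Map detected evidence to an intervention for the next task."""
    if disengaged:
        return "re-engagement prompt"
    if recent_errors >= 2:
        return "just-in-time hint"
    return "no adaptation"


# Three very fast responses in a row suggest disengagement.
events = [{"response_time_s": t} for t in (1.1, 0.9, 1.5, 4.2)]
print(select_adaptation(detect_disengagement(events), recent_errors=0))
```

Real systems would rely on richer evidence models than a single response-time rule, but the architecture is the same: continuously collected behavior feeds a detector, and the detector's output drives the choice of adaptation.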

Some of these proposed adaptations would significantly change the structure of the assessment — for example, some students may answer additional questions that are not presented to other students or may be evaluated using somewhat different rubrics. The more that the assessment tasks are contextualized and personalized, the more difficult it is to compare performance in a standardized way across individuals. Despite this tension, we believe there is great potential in the proposed “caring” approach. We note that our vision of “caring” assessment is consistent with Bob Mislevy’s notion of a “conditional sense of fairness” — that is, considering “conditional fairness” in terms of using contextual information about students’ background to adapt assessment designs and scoring rules in order to obtain more nuanced evidence about the capabilities of diverse students in light of the contexts in which they are learning and the resources they bring to the learning experience.

Critical questions for implementing “caring” assessments

While this vision for “caring” assessment appears straightforward, there are several critical questions that need to be answered in order to make such assessments a reality.

First, we must consider which student characteristics and contextual variables are most important to track within the student model. To answer this question, research should be conducted with large, diverse populations to examine how a wide range of characteristics interact with task performance and engagement.

Second, when issues with (poor) performance or (dis)engagement are detected, how and when should the system intervene? What adaptations will give students the best opportunities to demonstrate their knowledge and skills? To answer these questions, we need to test different modifications and examine which student subgroups benefit from which combination of supports or task variations; this work would be essential to ensuring that the modifications or interventions do not cause harm to any student subgroups.

Finally, what types of assessment results should be provided to different assessment stakeholders to support appropriate use of these results? What are the implications for providing scores resulting from “caring” assessments? How can we appropriately contextualize assessment results while maintaining good measurement properties? In other words, how can we enhance fairness and utility without sacrificing reliability and validity?

Our approach to designing “caring” assessments could yield highly nuanced, fine-grained information to be used to support instruction and further skill development given where students are, where they have been, and where they are going.

Jesse R. Sparks is a Senior Research Scientist at ETS. Blair Lehman is a Research Scientist at ETS. Diego Zapata-Rivera is a Distinguished Presidential Appointee and Senior Director of Research at ETS.