Testing, Teaching and Technology: Connecting the Dots

By Andrea Orr

One challenge teachers face in building effective curricula is identifying not just the skills students should master, but the likely pathways students follow toward mastery. Becoming a proficient writer, for example, requires mastery of multiple skills, from basic spelling and grammar to the ability to write persuasively.

A learning progression is a description of how students’ competencies develop from novice to expert. Assessments have not always been able to identify where students are on this path toward expertise, but technology is increasingly making this possible.

Here Joanna Gorin, Vice President of Research at ETS, discusses how assessments might incorporate technology to show more about where students are on the path toward expertise, and how educators might use this information.

What is the main thing you wish teachers understood about assessments?

As an assessment publisher and research organization, part of our responsibility is to support teachers’ understanding of how these tests and the scores they provide can best be used to improve learning, and we’re constantly looking at how to do this better.

This is important because what often gets lost in conversations about assessments is the power they have to help teachers guide their students along a positive learning path.

How can we advance that understanding?

The first thing that those of us in the assessment industry have to do is make sure we know what is going on in schools, with teachers and with students, so that we can design tests that reflect how educators and learning scientists are thinking about what mastery of targeted skills and competencies means. Are we focusing on the right competencies, and are we thinking about them in the right way, so that our assessments actually help teachers and students? For example, recent next-generation science standards have encouraged educators to think about science in terms of the integration of core scientific ideas, crosscutting concepts and the practices that define doing science, rather than just knowing scientific facts. This suggests that we’ll have to teach and measure achievement in science in a more complex way, one that reflects the integration of those ideas, concepts and practices.

Recently, while writing about all the challenges of education that we don’t yet know how to solve, you said “data excites me.” Since we’re talking about some of the ways technology can enhance assessment, explain why you find data exciting as an assessment tool.

I see the potential for data to be useful information — evidence of what we want to say about a student’s or teacher’s current mastery. But I also understand we have to be careful. Data, in and of itself, is not necessarily useful information. Not all data is created equal. The goal of assessment to support learning is to understand where students are now in terms of their mastery and how we can help them progress. That requires a very special kind of data: data that comes from carefully designed activities or test questions that we know are closely tied to the skills we want to measure.

What are some of the recent developments in data that can make assessments more effective tools for helping students progress toward mastery?

A lot of people are very excited about data that shows how students work through an activity or solve a problem. Knowing how a student arrived at an answer or final work product is often more meaningful than the product itself. For example, when assessing writing, this sort of test would measure not just the finished text but also capture how students went about composing it. How much time do they spend on planning, or on editing? Do they write in bursts, or do they struggle over each word? The data that current technologies can capture will at least allow us to see how those actions reflect more or less effective writing strategies.
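To make this concrete, here is a minimal sketch of how such a keystroke log might be summarized. The event format, the field names and the two-second pause threshold are illustrative assumptions for this example, not a description of any actual scoring pipeline.

```python
# Hypothetical sketch of writing-process analytics: split a keystroke
# log into bursts of fluent typing separated by long pauses, and
# report simple process indicators. The 2-second pause threshold is
# an illustrative choice, not a published parameter.

from dataclasses import dataclass

@dataclass
class KeyEvent:
    t: float      # seconds since the task started
    action: str   # e.g., "insert" or "delete"

def summarize_process(events: list[KeyEvent], pause_threshold: float = 2.0) -> dict:
    if not events:
        return {"bursts": 0, "mean_burst_len": 0.0, "pause_time": 0.0, "edit_ratio": 0.0}

    bursts = [[events[0]]]
    pause_time = 0.0
    for prev, cur in zip(events, events[1:]):
        gap = cur.t - prev.t
        if gap >= pause_threshold:   # a long pause starts a new burst
            pause_time += gap
            bursts.append([cur])
        else:
            bursts[-1].append(cur)

    deletions = sum(1 for e in events if e.action == "delete")
    return {
        "bursts": len(bursts),
        "mean_burst_len": sum(len(b) for b in bursts) / len(bursts),
        "pause_time": pause_time,               # rough proxy for planning
        "edit_ratio": deletions / len(events),  # rough proxy for revising
    }

# Example: four quick keystrokes, a long planning pause, then an edit.
log = [KeyEvent(t, "insert") for t in (0.0, 0.3, 0.6, 0.9)]
log += [KeyEvent(6.0, "delete"), KeyEvent(6.2, "insert")]
print(summarize_process(log))
```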

This is not to say that all writers should compose an essay the same way, just as it is not the case that all people would judge the quality of a piece of writing the same way. It is certainly true that human judgment of “good writing” can be somewhat subjective or difficult to define in terms of its characteristics. I think one of the clearest examples of this is poetry or creative writing. One aspect of its quality is whether the writing interests and engages us — does it strike a personal chord with the reader? For genres where quality is less objectively defined, we are unlikely to use machines (though, truth be told, those are even hard for humans to score reliably).

But it’s important to note that there are also certain principles of effective writing, such as the clarity of claims and the effective use of evidentiary support in a persuasive argument, that most educators and researchers agree upon. These characteristics of good writing are less subjective than we might think.

When it comes to learning progressions, what can we learn from analyzing not only a student’s response but the way he or she arrived at that response?

A learning progression is a developmental model that describes qualitative differences in how a student has mastered, or reasons about, a certain competency. It helps make explicit what is characteristic of a novice, versus an emerging learner, versus an expert. If my goal is to help students improve and move along that continuum, I have to know what those different levels of mastery look like, and I have to connect those levels and behaviors to activities that teachers can observe. Assessment is one tool that can help make it clearer where a student might be on a learning progression.

Almost any assessment that we administer with technology can give us more information about how the student is going about solving the problem. Here’s a simple example: with technology, we can capture how long it takes someone to give a response or take an action while solving a problem, and that may provide insight into what they are doing that leads to a right or wrong answer. If they answer very quickly and get it wrong, they are possibly just guessing, or at least not fully engaged. But if they are taking a long time, they are probably really trying. Wouldn’t this change how a teacher interprets a student’s response? Wouldn’t the teacher adjust their instruction accordingly?
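A minimal sketch of that heuristic follows, assuming a five-second cutoff for a rapid guess; both the threshold and the labels are illustrative, not an actual scoring rule.

```python
# Hypothetical illustration of combining correctness with response
# latency. The 5-second "rapid" threshold is an assumed value.

def interpret_response(correct: bool, seconds: float,
                       rapid_threshold: float = 5.0) -> str:
    """Suggest how a teacher might read a single timed response."""
    if seconds < rapid_threshold:
        return ("fast and correct: likely fluent" if correct
                else "fast and wrong: possibly guessing or disengaged")
    return ("slow and correct: effortful success" if correct
            else "slow and wrong: genuine attempt, likely a misconception")

print(interpret_response(correct=False, seconds=2.1))   # rapid miss
print(interpret_response(correct=False, seconds=48.0))  # effortful miss
```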

Now, what if I design a science test where the student really needs to interact with the computer, dragging and dropping text and conducting virtual experiments? All of that produces a trail, and we can collect information that tells us, for example, how systematically students are testing a hypothesis.
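One way such an event trail might be mined is sketched below, under illustrative assumptions: treat each virtual-experiment run as a set of variable settings and check how often the student changes exactly one variable between consecutive runs, a common indicator of a controlled, systematic strategy. The trial format and the scoring idea are examples, not a specification of any real assessment.

```python
# Hypothetical sketch: score how systematically a student runs
# virtual experiments. Each trial is a dict of variable settings;
# "change exactly one variable per run" (a control-of-variables
# indicator) is an illustrative scoring idea.

def vars_changed(prev: dict, cur: dict) -> int:
    """Count variables whose setting differs between two trials."""
    return sum(1 for k in cur if cur[k] != prev.get(k))

def systematicity(trials: list[dict]) -> float:
    """Fraction of consecutive runs that vary exactly one variable;
    1.0 means a fully controlled strategy."""
    if len(trials) < 2:
        return 0.0
    controlled = sum(1 for prev, cur in zip(trials, trials[1:])
                     if vars_changed(prev, cur) == 1)
    return controlled / (len(trials) - 1)

# Example: the student varies only ramp height, then only ball mass.
runs = [
    {"ramp_height": 1, "ball_mass": 10, "surface": "wood"},
    {"ramp_height": 2, "ball_mass": 10, "surface": "wood"},
    {"ramp_height": 2, "ball_mass": 20, "surface": "wood"},
]
print(systematicity(runs))  # 1.0: every run changed exactly one variable
```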

One of the hopes for computer-based tests is that we will be able to tie a student’s conclusion to the steps they took to reach it. That helps us shift from assessing students to identify their deficits toward measuring what students can do, even if it is not yet everything we hope it will be.

Has assessment kept up with technology?

I certainly think many people would say that advances in technology are more visible in other industries than in testing. But in many ways that is for good reason. We have to be really careful about how we introduce technology into educational assessment, particularly when consequential stakes are tied to test performance, because it can be an incredible source of inequity. Students who have less exposure to or experience with technology may struggle to understand how to interact with the computer or tablet. And unfortunately, access to technology is not evenly distributed: traditionally underserved students, particularly those from high-poverty districts and communities, are less likely to have experience using technology for learning or assessment.

There are also some logistical barriers to incorporating technology. If we think about technology-based assessment for K–12, it has to be something that every school has access to and it has to be reliable and affordable. I am a huge proponent of technology-based assessment. But these are some of the challenges that explain why we haven’t yet fully embraced all the technology that’s available. We have to find a way to balance all of these concerns.

That said, the data that technology can generate can be very valuable. Given that most people interact extensively with technology every day, incorporating more of it into assessments could produce tests that more closely mirror the world students live in.

What about all the qualities that are likely to lead to success in school and life that assessments do not measure?

At ETS, it is our job to produce tests that provide valid and reliable scores reflecting what educators want to measure. Are cognitive abilities the right thing to measure, or should we focus on social and emotional abilities? After all, many people would say that educational systems should not only teach traditional academic abilities but also prepare students to be effective communicators, good collaborators and actively engaged citizens. But those are not the tests we are usually asked to build. We can do, and are doing, research to see whether those other competencies (social-emotional, inter- and intrapersonal) are relevant for education stakeholders. And that is where we really need communication and collaboration with academic communities, education policymakers, curriculum providers and educators.

How might assessment aligned to learning progressions help schools make better decisions about resource allocation, curriculum and professional development?

The only way for assessments to have a positive impact on decisions is for them to be part of a larger, coherent system in which classroom curriculum, instructional design and teacher professional development are all coordinated around a common model of learning. Learning progressions provide the basis for the coherence of the system. Assessment can help shine a light on areas of strength for students, as well as where students are consistently struggling to master the curriculum. Or, if an entire class or school is showing a similar pattern of weaknesses, perhaps the instructional design or curriculum could be reevaluated.

Tying information about students’ learning progression levels to policy decisions is a bit more complicated. But if we see that large numbers of students are consistently misunderstanding certain concepts, it becomes clear that those are the areas we need to work on.