Study: States Facing Challenges with Kindergarten Entry Assessments
Research reveals many factors affect the validity and reliability of such measures

Contact: Tom Ewing                  Phone: 1-609-233-0090                 Email: mediacontacts@ets.org

Princeton, N.J. (May 9, 2018) – With at least 40 states developing and implementing kindergarten entry assessments (KEAs) in the last five years, questions arise over the best way to determine whether such measures are valid and reliable. A new report from ETS reviews eight state KEAs, looks at lessons learned, and offers recommendations for future research to further enhance their use by teachers and state policymakers.

The report, “Real World Compromises: Policy and Practice Impacts of Kindergarten Entry Assessment-Related Validity and Reliability Challenges,” was authored by Debra J. Ackerman, of ETS’s Policy Evaluation and Research Center. It is the sixth in a series of early childhood education policy reports that explore issues related to improving instruction and the use of assessment and other data in programs serving children ages 0–5.

Despite this collective focus on KEAs, there are many challenges in developing, piloting, field testing and implementing these measures. “Neither the KEAs currently in use nor their content is uniform,” Ackerman says. “Such variations are undoubtedly due to state control of K–12 education and to differing standards for what kindergartners should know and be able to do. They also likely arise from the real-world compromises that must be made to bolster an assessment’s potential validity, reliability and utility for specific purposes, populations and settings.”

Ackerman points out that there is no general agreement on a single definition of what a KEA should do or on the assessment approach to be used. Some states use direct measures, while others use observational rubrics; some purchase commercially available measures, while others design their own. In all cases, validity and reliability are key.

“Validity refers to the degree to which an assessment measures what it is supposed to measure, but a key question to ask is how to design a measure so that it is maximally useful for its primary purpose, such as informing a kindergarten teacher’s practice or guiding state policymakers’ decisions,” Ackerman notes. “Reliability refers to the extent to which a measure provides consistent results across different assessors, observers and test forms. Regardless, a KEA should not be used to deny an otherwise age-eligible child’s entry into kindergarten.”

Ackerman used a case study approach, which is particularly well suited to identifying the “real world” issues that can change state KEA policies and practices. The study looked at KEAs in Delaware, Illinois, Maryland, North Carolina, Ohio, Oregon, Pennsylvania and Washington, each representing a different approach to kindergarten entry assessment. Among the real-world challenges that caused states to refine or alter their programs were:

  • Aligning learning standards with KEA content
  • Floor and ceiling issues (e.g., items too easy or too difficult for students)
  • Teacher feedback after administering the measures
  • The “time crunch” kindergarten teachers faced when administering these measures in the first few months of school
  • Insufficient training and technical assistance for teachers serving as KEA assessors or observers
  • Accommodating students with special needs or English-language learners
  • Failures of online technology platforms
  • Difficulty accessing relevant data from state systems and determining how to apply it in classroom practice

“One purpose for conducting the study was to expand the early childhood education field’s understanding of the shaping role these validity and reliability issues can play,” Ackerman explains. “Another purpose was to highlight the importance of iterative research as a means for both uncovering these issues and informing the policies and practices that can impact KEA validity and reliability.

“The first key implication of this study is that bottom-up, ‘real-world’ compromises will likely need to be made, especially as a measure is developed, piloted, field-tested or rolled out on a large scale,” Ackerman continues. “This may particularly be the case when the majority of teachers are not experienced users of the specific approach or measure. In short, it may not be so much a question of ‘if there are validity and reliability concerns,’ but instead ‘What are they?’ and ‘How might they be mitigated?’ Furthermore, such compromises may involve reconsidering a KEA’s content, administration timeline policies, and the training and technical assistance provided to teachers.”

While Ackerman could not identify a common research model for investigating both measure validity and reliability and policy and practice adequacy across all eight KEAs, she did note, “It could be that the more salient research implication is the importance of engaging in a customized plan, use, review and revise research model to generate data on what assessment programmatic inputs are — and are not — supporting KEA validity and reliability. Furthermore, this study’s results suggest the value of conducting research on an ongoing basis as opposed to only focusing on initial content or administration issues.”

“This study expands the early childhood education field’s understanding of validity and reliability issues that can shape evolving KEA policies and practices. These assessments have the potential to contribute to teachers’ practice and to policymakers’ efforts to close school readiness gaps,” said Michael Nettles, ETS Senior Vice President and Edmund W. Gordon Chair of Policy Evaluation and Research. “Policymakers, assessment developers, researchers, teachers and others who work directly with young children should work collaboratively to design, implement and use data from kindergarten entry assessments.”

The report also notes that the feasibility of conducting such validity and reliability studies depends on funding; now that federal Race to the Top–Early Learning Challenge grants are ending, states may be hard-pressed to conduct such research.

Copies of the report are available from Wiley Online Library at https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12201


About ETS

At ETS, we advance quality and equity in education for people worldwide by creating assessments based on rigorous research. ETS serves individuals, educational institutions and government agencies by providing customized solutions for teacher certification, English language learning, and elementary, secondary and postsecondary education, and by conducting education research, analysis and policy studies. Founded as a nonprofit in 1947, ETS develops, administers and scores more than 50 million tests annually — including the TOEFL® and TOEIC® tests, the GRE® tests and The Praxis Series® assessments — in more than 180 countries, at over 9,000 locations worldwide. www.ets.org