For many schools, the Every Student Succeeds Act (ESSA) offers the opportunity to demonstrate contributions to student growth — in addition to proficiency — when it comes to accountability. This is a critical improvement, because looking at how much students grow academically during the school year — regardless of their achievement level — provides a more complete view of school effectiveness.
One challenge remains, however, in accurately evaluating a school’s impact on student growth: summer break. While away for the summer, students can lose a substantial amount of what they learned during the school year. Most states include this summer loss when measuring growth, because they commonly look at growth as a spring-to-spring change in summative test performance. This raises two big questions: (1) Does this practice provide an accurate view of a school’s contribution to student learning? and (2) Does it affect which schools are identified for improvement?
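To make the difference between the two growth measures concrete, here is a minimal sketch with hypothetical test scores. All numbers are illustrative, not drawn from the study; the point is only that a spring-to-spring measure folds the summer change into what gets attributed to the school, while a fall-to-spring measure does not.

```python
# Hypothetical scale scores for one student (illustrative only).
spring_year1 = 210   # end of the prior school year
fall_year2 = 204     # start of the current year, after summer break
spring_year2 = 219   # end of the current school year

# Spring-to-spring growth: the measure most states use under ESSA.
# The summer change is baked into the school's apparent contribution.
spring_to_spring = spring_year2 - spring_year1

# Fall-to-spring growth: the change during the school year itself.
fall_to_spring = spring_year2 - fall_year2

# The gap between the two measures is exactly the summer loss.
summer_loss = spring_year1 - fall_year2

print(spring_to_spring)  # 9
print(fall_to_spring)    # 15
print(summer_loss)       # 6
```

In this toy case, the spring-to-spring measure credits the school with 9 points of growth even though it produced 15 during the year; the 6-point summer loss is counted against the school.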
A recent study by the Collaborative for Student Growth at NWEA® sheds light on these questions. Using data from a state that tests students on reading and math in the fall and spring, the collaborative applied a new statistical model to examine the impact of using fall-to-spring versus spring-to-spring growth measures when estimating school contributions to student learning.
The study found the schools identified as in need of improvement were different, depending on whether fall-to-spring or spring-to-spring growth was considered. It also showed that when summer learning loss is accounted for, more student growth is attributable to schools. In other words, under the growth models used in many ESSA plans, schools are likely being designated as low-performing based, in part, on learning loss that occurs when students are not in school.
This merits more attention. The goal of accountability under ESSA is to obtain an accurate view of school performance, so that schools in need of assistance can get it and promising practices at more successful peer schools can be examined and shared. Does holding schools accountable for what happens during the summer support this goal? It’s especially worth considering given that summer learning loss is more likely to occur in underserved communities with less access to summer learning and enrichment activities. Growth models that ignore summer learning loss have the potential to disproportionately hold schools in low-income areas accountable for factors outside their control.
There is no simple solution, because most states rely on annual, summative test results to measure growth, which means they lack access to fall-to-spring growth data (interim tests are needed to obtain that data). And even with access to within-year growth data, accounting for both within-year and between-year (spring-to-spring) growth in the same model is statistically complex.
That said, it’s a challenge worth grappling with, because schools and teachers deserve a solution. There are multiple angles to consider: Policymakers can advocate for consistent funding for districts in low-income areas to run evidence-based programs that help students sustain and even grow their learning over the summer. This should happen no matter what, given the role of summer learning loss in contributing to achievement gaps, but it’s especially important if schools are going to be held accountable for summer learning.
States can also explore innovative approaches to assessment that would allow them to consider fall-to-spring growth data in accountability. For example, the Georgia Department of Education is working with several groups of districts to explore alternative approaches to state assessment that would yield within-year growth data while fulfilling accountability requirements and increasing testing efficiency. NWEA is involved in this effort, and it’s exciting to be working in partnership to tackle this complex challenge. It’s what is needed to support the goal that states, districts and schools all share — better learning and outcomes for all kids.