The latest reauthorization of the ESEA, the Every Student Succeeds Act, or ESSA, provides a great deal of testing flexibility to states. At the high school level, it specifically allows districts (with state approval) to adopt “nationally recognized” exams in lieu of the state standardized exam. The alternative exams must meet state requirements for content and quality, and provide comparable data on student achievement for state accountability purposes.
The language was proposed in part to address concerns about over-testing by allowing states to consolidate to a single junior-year exam. ESSA specifically mentions the ACT and SAT – exams that most high school students already plan to take. A state adopting the ACT or SAT for this purpose might realize fiscal efficiencies: one exam is cheaper than two, especially an exam that may already be administered statewide. In addition, unlike state standardized assessments, college entrance exams like the ACT and SAT arguably have inherent value to students. With “skin in the game,” students are more likely to take the assessments seriously, thereby increasing the validity of a key state accountability measure.
Yet ESSA’s assessment flexibility provision brings into sharp relief a core tension in testing: range of content examined vs. utility for a specific purpose. Can a single assessment effectively serve two masters, measuring content mastery and gauging college readiness? PARCC and Smarter Balanced, the assessment systems aligned to college readiness standards, have certainly attempted to bridge this divide, providing high school assessments that are meant both to align with content standards and to serve as a benchmark of college readiness. ACT and SAT have also staked their claim in this market, unveiling new “double duty” products: ACT with ACT Aspire, an assessment system meant to align to a state’s college readiness standards, and SAT with a newly redesigned exam, meant to more closely reflect high school content knowledge.
The core concern around “double duty” assessments is that by attempting to serve two purposes, they may end up serving neither. Recent research from AIR, HumRRO, and the Fordham Institute shows that college readiness assessments vary both in their ability to accurately reflect student attainment relative to the Common Core and in their rigor in comparison to NAEP. The ESSA language also introduces a new policy dynamic by creating space for district-level “opt out” measures rather than statewide assessment substitutions. While several states have already proposed bills to extend this flexibility to their districts, it remains to be seen how local choice will be reconciled with statewide accountability requirements.
The Department of Education has released several discussion papers in preparation for the negotiated rulemaking sessions that will set guidelines for ESSA implementation in three areas: standards, assessments, and supplement-not-supplant. The brief addressing high school assessment flexibility raises a critical question: how will states ensure that the alternative assessments are content-aligned, provide comparable achievement scores, and set equal expectations across districts? Only time will tell what the ESSA regulations will actually allow in terms of high school assessment flexibility, and whether any of the current palette of college-readiness assessments will be up to the task.