Connecting and Correcting the Flaws in (ESEA's) High Stakes Proficiency Assessments of Students' Academic Achievement and (OSEP's) the New Special Education Results Driven Accountability System
I hope that your January (and New Year) has gone well. . . and that you are focused on your student, staff, and school goals and outcomes as we enter the second half of the school year.
The ESEA Debate on High-Stakes Testing and the Federal Move to Results Driven Accountability
With a new Congress seated and in session, a lot of attention is focused on reauthorizing the Elementary and Secondary Education Act (ESEA) within the next two months. In fact, there have already been a number of Congressional hearings (CLICK HERE for related story), at least two draft “discussion bills” (one each from Senator Lamar Alexander and Representative John Kline, the respective chairs of their chamber’s education committees), and a reauthorization policy speech and outline from Secretary of Education Arne Duncan.
Unfortunately, most of the public attention seems to be on whether the new ESEA should continue to require (a) annual high-stakes assessments to determine students’ academic proficiency, and (b) the (flawed) use of these data to determine a school or district’s effectiveness. In fact, over the years, the debate has escalated to activism (CLICK HERE for related story) as students, parents, and other stakeholders have planned and carried out actions to “opt out” of these assessments.
_ _ _ _ _
In the parallel world of federal policy and practice, the U.S. Office of Special Education Programs (OSEP) is requiring every state education department to document its “Phase I” approach to the new Results Driven Accountability (RDA) process as represented in Indicator 17 (the State Systemic Improvement Plan; SSIP). This is a required part of each state’s annual special education State Performance Plan/Annual Performance Report, which is due in Washington, DC in the next few days (CLICK HERE for more information).
And so, along with the current or reauthorized ESEA, states, districts, and schools now must attend to the RDA initiative as described by Deborah Delisle, the U.S. Assistant Secretary for Elementary and Secondary Education, and Michael Yudin, the Acting U.S. Assistant Secretary for Special Education and Rehabilitative Services, in a joint May 21, 2014 letter to every state’s Chief State School Officer:
“The U.S. Department of Education is implementing a revised accountability system under the IDEA known as Results-Driven Accountability (RDA), which shifts the Department’s accountability efforts from a primary emphasis on compliance to a framework that focuses on improved results for students with disabilities. RDA will emphasize child outcomes such as performance on assessments, graduation rates, and early childhood outcomes. In the coming year, each State will be required to develop a State Systemic Improvement Plan (SSIP) as part of the State Performance Plan / Annual Performance Report that the State submits annually in accordance with the IDEA. In developing the SSIP, States will use data to identify gaps in student performance, analyze State systems, and then implement targeted, evidence-based reforms to address the gaps.”
_ _ _ _ _ _ _ _ _ _
The Flaw(s) of Single-Measure Accountability
One of the single biggest flaws in the federal (and, sometimes, state) approach to accountability is the dependence on a single measure. While a single measure may be acceptable as part of a screening to determine whether a student may be having academic or behavioral difficulties, or a progress monitoring approach to determine whether a student is making progress in a specific area, it is not psychometrically acceptable for program evaluation.
In fact, using a single measure for school or district accountability (i.e., a single test to measure students’ academic proficiency) increases the probability for the following additional flaws or inappropriate/ineffective schooling practices or perspectives:
* Concluding, on the basis of a single test score, that a school (or student) is academically successful and that it is doing the “right things” that are “causing” its success, or concluding that a school (or student) is academically unsuccessful and needs to change some of its “ineffective” practices
* Taking a “top-down” perspective where the test-specific factors that make students successful are analyzed, rather than a “bottom-up” perspective that looks at the curriculum, instruction, teacher, and student factors that help students to learn, master, and be able to apply progressive levels of “real-world” knowledge, information, and skills
* Said a different way: Schools need to avoid teaching to the test, focusing instead on educating students for functional understanding and application. . . that is. . . they need to focus less on test results, and more on “real-world” educational results
* Focusing exclusively on a school’s academic program to the detriment of the social, emotional, and behavioral instruction needed to address student trauma and stress, attention and engagement, and project-based and cooperative learning group interactions
* Teaching students – especially academically struggling students – at their grade-level, rather than at their current functional skill, understanding, and/or instructional level
_ _ _ _ _
Relative to the special education Results Driven Accountability State Systemic Improvement Plan (SSIP), we are already seeing state departments of education interpret the OSEP Annual Performance Report’s Indicator 17 requirement such that they are planning to use a single measure to assess the success of their SSIP activities.
This is in spite of the fact that OSEP wants states to identify “measurable result(s) for children with disabilities” (NOTE the plural possibility of multiple results or outcome measures).
For example, one state is thinking about using only the DIBELS to measure the literacy improvement of their students with disabilities. . . largely because the schools are already collecting DIBELS data. Beyond the single-measure flaws described above, this choice largely ignores much of what was learned during the Reading First era. . . namely, that:
* The DIBELS is a screening and not a program evaluation tool; that
* It does not effectively measure literacy comprehension; and that
* The results of this process will likely be students who are better at decoding text, but not better at understanding it.
And so, if state departments of education measure their SSIP goals using (flawed) measures of convenience, their “systemic efforts” will not help students with disabilities to become more proficient across the diverse areas of literacy. This, then, will negatively impact the students’ ability to demonstrate proficiency on their high-stakes assessment tests, and the vicious cycle will continue.
_ _ _ _ _ _ _ _ _ _
So. . . What Do We Do ???
In short, there are a number of possible solutions to these problems. For example:
* We need to communicate with our U.S. Senators and Representatives immediately, telling them to eliminate the single-test perspective of accountability in ESEA, and endorse a multi-faceted approach that evaluates schools on having effective curriculum and instruction, multi-tiered services and supports for academically struggling and behaviorally challenging students, outcomes-based professional development and teacher evaluation, and progress monitoring and evaluation systems that measure student learning, mastery, and growth.
* We need to communicate (nothing is “set in stone” yet) with the special education unit in our state departments of education to find out what their SSIP will focus on, and how its success will be measured . . . so that the “single measure mentality” does not predominate this important initiative.
* We need to remember the underlying science of effective program evaluation, and apply it in sound practice.
* We need to remember that educational effectiveness and excellence cannot be legislated; they must be planned, resourced, disseminated, and evaluated.
* We need to blend targeted outcomes with common sense when implementing services, supports, strategies, and programs at the student, staff, and school levels. If it doesn’t make sense, it probably won’t work.
_ _ _ _ _
In the end, we have two great opportunities with the reauthorization of ESEA, and the initiation of RDA. And yet, if we focus on politics, convenience, oversimplification, and rhetoric, our goals will not be attained, and we will add another layer of frustration to a process that has so many challenges and so many needs.
Please accept my THANKS for the great services and supports that you provide to all of your students each and every day. Have a GREAT week !!!