Saturday, October 7, 2017

Improving Student Outcomes When Your State Department of Education Has Adopted the Failed National MTSS and PBIS Frameworks (Part I of II)



Effective and Defensible Multi-Tiered and Positive Behavioral Support Approaches that State Departments of Education Will Approve and Fund

Dear Colleagues,

Introduction

   Over the past six weeks, as I consult around the country and do telephone conference calls with high-powered district and school leaders, the development of sound and effective multi-tiered services—to help academically struggling and behaviorally challenging students—keeps coming up.

   The biggest reasons for this are:

   * Schools are facing even more at-risk, unprepared, underachieving, unresponsive, and unsuccessful students than ever before.

   * Schools have limited intervention resources and resource people. . . but the people they do have often lack the deep and intensive intervention expertise that is needed, or they are not strategically deployed so that they can provide the intensity of services needed by their students.

   * Schools are using (or are “required” by their State Departments of Education to use) obsolete and originally scientifically-unsound frameworks in the areas of MTSS (Multi-Tiered Systems of Supports), PBIS (Positive Behavioral Intervention and Supports), and RtI (Response to Intervention).
_ _ _ _ _

   Right now, I am in California for two weeks . . . working with two school districts with high numbers of students from poverty . . . who are also ravaged by any number of trauma-related life crises and events. 

   These districts have some incredibly talented related services professionals—counselors (some of whom are actually clinical social workers or marriage and family therapists), school psychologists, applied behavior therapists, and special educators.

   But even these professionals (a) have intervention gaps that are (b) compounded by unsound MTSS/RtI implementation processes (advocated by their state through professional developers at their County School District), which are (c) based on the U.S. Department of Education’s MTSS, PBIS, and RtI frameworks (promoted largely through the Office of Special Education Programs—OSEP).  Critically, many independent studies—INCLUDING those commissioned by the U.S. Department of Education—have shown that these frameworks, separately and collectively, result in trivial, questionable, or unsustainable—if not negative—outcomes for students.
_ _ _ _ _ _ _ _ _ _

How to Educate Your State Department of Education . . . When it has Embraced (or Mandated) Unsound MTSS, PBIS, or RtI Approaches

   But beyond my recent consulting work, I also help school districts write grants so that they can secure state, federal, and foundation money.

   Last month, I collaborated with a district applying for state department of education school improvement funds to beef up their multi-tiered system of supports. 

   The Problem:  The state (like many states) had actually codified the U.S. Department of Education’s faulty MTSS and PBIS frameworks into its state education laws/statutes . . . and the grant RFP appeared to require unsound practices that anyone with psychometric, implementation science, and systems scale-up knowledge and experience would recognize as unworkable, and as likely either to delay services to students or to educationally harm them.

   In writing the RFP, we addressed this situation by:

   * Presenting the research-to-practice data and results that invalidated the unsound practices in the state’s framework;

   * Detailing the research-to-practice data and results that validated our proposed effective practices; and

   * Framing our proposal as one with “value-added” procedures, services, supports, strategies, and interventions that would (a) build on the defensible ones in the state’s statute; (b) improve upon or substitute for the indefensible ones; (c) help more effectively and efficiently meet the grant’s “ultimate” student-focused outcomes; and that might (d) require some level of waiver (if needed).
_ _ _ _ _

   Below (with some minor editing), I will begin to share the sections of our actual proposal that are most relevant to this two-part Blog discussion.

   In this first part—Part I—I will share the proposal’s description of the district and state’s current MTSS system.  This is followed by a section that we titled, Why the RFP as Written will not Succeed.  Finally, the proposal discusses Seven Flaws that Need Attention in a Multi-Tiered Services Re-Design.

   In Part II of this Blog discussion (posted in about two weeks), I will share the proposal’s section addressing Ten Resulting Practices that Need Inclusion in a Multi-Tiered Services Re-Design, and make some concluding comments.

   So. . . let’s begin.
_ _ _ _ _ _ _ _ _ _

From the Grant Proposal:  Describing the District and State’s Current MTSS System

   The Anytown (obviously, a pseudonym) Public School District's instructional staff is responsible for successfully implementing and sustaining a Multi-Tiered System of Supports (MTSS) to accelerate and maximize students’ academic and social-emotional outcomes through the application of collaborative data-based problem solving utilized by effective leadership at all levels of the educational system.

   The MTSS process is coordinated by the District’s Office of Curriculum and Instruction which implements the relevant MTSS policies and procedures relative to State Board Policy XXXX.  This Office also provides MTSS professional development for school-based teams, administrators, staff, and parents.

   In addition, the Office of Curriculum and Instruction, as well as the Office of Special Education, offers guidance on appropriate intervention data collection, data-based decision making, evaluation, and progress monitoring for students in need of supplementary intensive academic and behavioral supports, in order to ensure that all students graduate from high school college- and career-ready.

   The State’s MTSS process involves a Three Tier Instructional Model designed to meet the needs of every student.  The tiers of instruction involve the following:

   Tier 1: Quality classroom instruction based on State Curriculum Frameworks.      
   Tier 2: Focused supplemental instruction.
   Tier 3: Intensive interventions specifically designed to meet the individual needs of students.

   If strategies at Tiers 1 and 2 are unsuccessful, students must be referred to the Teacher Support Team (TST). The TST is the problem-solving unit responsible for interventions developed at Tier 3.  Each school within the Anytown Public School District has a Teacher Support Team in accordance with the process developed by the Department of Education.

Explicit and Implicit Goals of the RFP

   While the explicit goals or deliverables of the RFP involve one year of “awareness-building” MTSS professional development and follow-up, the implicit goals of the RFP involve the academic and social, emotional, and behavioral success of all of the students in the targeted schools.  These schools are currently “Focus” schools—in need of improvement—based on the State’s Accountability Model.
_ _ _ _ _ _ _ _ _ _

From the Grant Proposal:  Why the RFP as Written will not Succeed

   While our organization clearly has the capacity, and is happy, to provide the professional development requested in the RFP (thereby meeting its explicit goals), it firmly believes that this alone will not help the District accomplish its implicit goal:  the academic and social, emotional, and behavioral success of all of the students in the target schools. 

   And while we will address the major thrust of the RFP later in this Narrative, we believe that the District first needs to “add value” to the MTSS process currently recommended by the State. 

   Below is a discussion of ways to “upgrade” the current MTSS approaches so that the Professional Development in the RFP has the greatest chance for success.
_ _ _ _ _ _ _

The Elementary and Secondary Education/Every Student Succeeds Act and Multi-Tiered Services

   The Elementary and Secondary Education/Every Student Succeeds Act (ESEA/ESSA) was signed into law by President Obama on December 10, 2015.  Most notably, the Law transfers much of the responsibility for developing, implementing, and evaluating effective school and schooling processes to state departments of education and school districts across the country.  It also includes a number of specific provisions to help to ensure success for all students and schools.

   Relative to at-risk, disengaged, unmotivated, unresponsive, underperforming, or consistently unsuccessful students, ESEA/ESSA defines and requires districts and schools to establish a “multi-tiered system of supports” for specific groups of students.

   Significantly, the term “response-to-intervention” (or RtI) or any of its derivatives never appears in the new ESEA/ESSA. 

   [Parenthetically, this term similarly, never appears in the federal Individuals with Disabilities Education Act (IDEA).]

   Even more significant is the fact that the term “multi-tiered system of supports” appears only five times in the Law, always in lower case letters, and NEVER with the capital letter acronym:  MTSS.

   Thus, the MTSS framework advocated by the U.S. Department of Education’s Office of Special Education Programs (and its many funded National Technical Assistance Centers, as well as many State Departments of Education) is NOT REQUIRED by federal law.
_ _ _ _ _

   ESEA/ESSA defines “multi-tiered system of supports” as:

“a comprehensive continuum of evidence-based, systemic practices to support a rapid response to students’ needs, with regular observation to facilitate data-based instructional decision-making.”

   Relative to the five times the term appears in the Law, two of those appearances are in the definition above.  The other three citations appear in sections where the Law talks about the need for all districts receiving ESEA funds to:

   * “(F) (D)evelop programs and activities that increase the ability of teachers to effectively teach children with disabilities, including children with significant cognitive disabilities, and English learners, which may include the use of multi-tier systems of support and positive behavioral intervention and supports, so that such children with disabilities and English learners can meet the challenging State academic standards.”

   * “(4) Provid(e) for a multi-tier system of supports for literacy services.”

   * Offer professional development opportunities that “(xii) are designed to give teachers of children with disabilities or children with development delays, and other teachers and instructional staff, the knowledge and skills to provide instruction and academic support services, to those children, including positive behavioral interventions and supports, multi-tier system of supports, and use of accommodations” . . .
_ _ _ _ _

   Meanwhile and relatedly, the term “positive behavioral intervention and supports” (which appears twice in the quoted sections above) is NEVER defined in ESEA/ESSA.  While it appears in the 2004 reauthorization of IDEA, it is NOT defined there either. 

   Significantly, the term “positive behavioral interventions and supports” appears only THREE times in the entire ESEA/ESSA law—always in lower case letters.  That is, the term NEVER appears with the individual words capitalized, the PBIS acronym NEVER appears, and the word “framework” (as in PBIS framework) NEVER appears in the law.

   Thus, as with MTSS, ESEA/ESSA DOES NOT REQUIRE the PBIS framework or program advocated by the U.S. Department of Education’s Office of Special Education Programs, its many funded National Technical Assistance Centers, as well as many State Departments of Education.
_ _ _ _ _

   The “Bottom Line” in all of this is that every State Department of Education across the country that accepts federal funds:

   * Must develop its own multi-tier system of supports—at least for the conditions described in the Law above (clearly, they can go beyond the Law);

   * Is not required to adopt the U.S. Department of Education’s Office of Special Education Programs MTSS framework, and should not be penalized financially as long as their approach meets the definition and conditions above;

   * Needs to revisit and revalidate its multi-tiered system of supports to ensure that the services, programs, strategies, and interventions being used meet the other facets of ESEA/ESSA—that is, to ensure that students with disabilities, with developmental delays, who are English learners, and who are struggling with literacy can meet the challenging State academic standards.
_ _ _ _ _ _ _ _ _ _

From the Grant Proposal:  Seven Flaws that Need Attention in a Multi-Tiered Services Re-Design

   In order to meet the “Bottom Line” above, state departments of education and school districts nationwide must recognize that a number of federal reports have demonstrated that the federal RtI and MTSS frameworks have not been successful.  For example:

   Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of Response to Intervention Practices for Elementary School Reading (NCEE 2016-4000). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

   CLICK HERE for Publication

   Thus, as state departments of education and districts rethink their multi-tiered system of supports, they need to recognize and correct the flaws that have undermined the success of previous RtI and MTSS approaches.

   Below are seven flaws that need attention in the re-design process.  Many of these flaws were identified through an extensive review of the currently existing state RtI or multi-tiered services guidebooks and systems.
_ _ _ _ _

Flaw #1.  Missing the Interdependency between Academics and Behavior

   When teachers have academically or behaviorally struggling students, there are two initial critical questions:

   * Do you have students who are behaviorally acting out because of academic frustration?

   * Do you have students who are academically not learning (or not learning quickly enough) because they do not have certain behavioral skills (sitting in their seat, paying attention, working in interpersonally effective ways with others)?

   When they answer "Yes" to both questions (which is the norm), they are demonstrating that academic instruction, learning, and mastery are interdependent with classroom discipline, behavior management, and student self-management.

   Thus, it does not make sense for a state or district multi-tiered process to focus only on academic skills. . . to the exclusion of students' social, emotional, and behavioral skills.  

   We have seen this time and time again—as schools have separate problem-solving teams for students with academic problems and students with behavioral problems, respectively.  When this happens, the “academic team” only assesses for “academic problems,” and the “behavioral team” only assesses for “behavioral problems.”  The flaw in this process occurs when a student, for example, is behaviorally acting out because of academic frustration.  Here, the behavioral team typically misses the underlying academic conditions that are triggering the student’s behavioral response (because the team does not assess them), and then it tries to treat the problem as a “discipline problem” rather than one that requires an academic intervention component.

   Conversely, the academic team does not typically ask whether students’ academic struggles are occurring because they do not (a) have the social skills to get along with others (e.g., in a cooperative learning group); (b) feel emotionally secure in class (e.g., due to teasing or school safety issues); or (c) have the behavioral skills to organize themselves (e.g., to work independently).  When students have social, emotional, or behavioral skill deficits, even the best teachers, curricula, technology, and instruction may not result in the desired academic outcomes.

   The “Bottom Line” is that schools should have the best academic and social, emotional, and behavioral assessment, instruction, and intervention experts in and available to the school on their school-level Teacher Support Teams.  When this occurs, questions regarding the interdependency between a student’s academic and behavioral status and contributions to specific situations will most assuredly be asked.
_ _ _ _ _

Flaw #2.  Missing the Continuum of Instruction

   Many state RtI or multi-tiered services guidebooks and systems do not provide a research-based continuum of services and supports that helps to organize and differentiate between "instruction" and "intervention."  These guidebooks talk about the need for intervention, but rarely provide any specificity.

   Over the past decade (or more), we have presented this continuum to states, districts, and schools across the country—presenting it as the PASS (Positive Academic Supports and Services) model.

   As is evident in the slide below (see Figure 1), RtI or multi-tiered services start with an effective teacher providing sound, differentiated instruction, supported by good classroom management, and the data-based progress monitoring of students' academic and behavioral learning and mastery.  

Figure 1.  The PASS (Positive Academic Supports and Services) continuum.

   When students are not learning (or learning quickly enough), an assessment process must be conducted to determine why the progress is missing (see Flaw #3 below).  This assessment could be done (a) by the teacher, (b) with the support of grade-level colleagues as part of a Grade-level Teacher Support Team, or (c) with the support of the multidisciplinary Building-level Teacher Support Team.  How the teacher assesses the problem is determined largely by his/her skills, and the duration or intensity of the problem (see Flaw #7 below).

   Once the underlying reasons for the problem have been validated, the teacher (once again—on his or her own, supported by grade-level colleagues, and/or with members of the Building-level Teacher Support Team) strategically decides how to solve the problem (see Flaw #4).

   If the student’s struggles are academically-related (as opposed to behaviorally-related), as in the Figure above, the problem may be solved through strategically-selected:

   * Assistive support technologies
   * Remedial approaches
   * Accommodation approaches
   * Curricular modification approaches
   * Targeted intervention approaches
   * Compensatory strategies

   When students are demonstrating social, emotional, or behavioral problems, a comparable continuum is used (after completing the needed functional assessments) that consists of strategically-selected:

   * Skill Instruction strategies
   * Speed of Learning and Mastery Acquisition strategies
   * Transfer of Training strategies
   * Emotional Control and Coping strategies
   * Motivational strategies
   * History of Inconsistency strategies
   * Special Situation strategies (Setting, Peer group, and Trauma- or Disability-related)
_ _ _ _ _

Flaw #3.  Avoiding Diagnostic or Functional Assessment until it is Too Late

   Many state RtI or multi-tiered services guidebooks, adopting the flawed approaches of the U.S. Department of Education's MTSS, PBIS, and RtI Intervention Technical Assistance centers, advocate for a "wait to fail, then assess" strategy.  That is, when students are not succeeding academically (for example) at Tier 1, they recommend 30 minutes of largely unspecified group interventions at Tier 2.  Then, if the students are still having problems, they recommend a diagnostic (or, for behavior, functional) assessment as the entry point to Tier 3.

   Significantly, this is the opposite of the “early assessment, early intervention” approaches in most other professions.  Indeed, when called to solve a problem, virtually every doctor, electrician, car mechanic, or other service-providing professional completes a diagnostic assessment at the beginning of the problem-solving process. . . to ensure that their first recommendations are their last recommendations (because the problem is solved).

   And so. . . why would anyone, in good conscience, "allow" a student to struggle for six to ten or more weeks in the classroom, and in a Tier 2 intervention, to the point where a diagnostic assessment is finally conducted to figure out what really is wrong?  

   And why would anyone do this knowing that, after these multiple and prolonged periods of “intervention” and failure, (a) the problem may be worse (or compounded); (b) the student might be more confused or frustrated or resistant to “another intervention”; and (c) a more intensive intervention might be needed because the problem was not identified and analyzed right from the beginning?
_ _ _ _ _

Flaw #4.  Not Linking Assessment to Intervention

   Many state RtI or multi-tiered services guidebooks and systems do not delineate the different types of assessment procedures that are typically used in the field (e.g., screening versus progress monitoring versus diagnostic versus implementation integrity versus high stakes/proficiency versus program evaluation assessments).  This often occurs because state departments of education write their guidebooks to meet a statutory requirement . . . rather than to educate their practitioners. 

   Relative to RtI processes that effectively help students with academic or behavioral difficulties, state guidebooks and systems typically do not emphasize the importance of linking diagnostic assessment results with the instructional or intervention approaches that have the highest probability of success.

   Critically, when school practitioners do not strategically choose their student-focused instructional or intervention approaches based on reliable and valid diagnostic assessment results, they are playing a game of "intervention roulette."  And, as in Vegas, the "house" usually wins.  But, in the classroom, the loss is the student's loss.

   Indeed, it is essential to understand that:

   Every time we do an intervention that does not work, we potentially make the problem worse, and the student more resistant to the next intervention.

   Said a different way:

   Intervention is not a benign act. . . it is a strategic act.  We should not be satisfied, professionally, because we are implementing interventions.  We should be satisfied when we are implementing the right interventions based on the right (reliable and valid) assessments, that result in the highest probability of success for an accurately identified and analyzed problem.
_ _ _ _ _

Flaw #5.  Focusing on Progress Monitoring rather than on Strategic Instruction or Intervention Approaches

   Many state RtI or multi-tiered services guidebooks and systems overemphasize progress monitoring. . . and then, they compound this flaw by overemphasizing curriculum-based measurement (CBM) to the exclusion of other curriculum-based assessment (CBA) approaches.

   Moreover, most of the progress monitoring examples—in the state guidebooks that we have extensively reviewed—are solely in the area of reading decoding and fluency (where the progress monitoring research has been most prevalent).

   Rarely do you see state guidebooks discuss progress monitoring for vocabulary and comprehension. . . not to mention the lack of progress monitoring examples in the different areas of math, written expression, spelling, and oral expression.  This is because progress monitoring using CBM approaches does not work well in these areas. 

   Finally, most state guidebooks do not explain how to effectively create (or evaluate the acceptability of) a progress monitoring probe.  That is, they do not emphasize that progress monitoring approaches must be strategically-selected for the assessment outcomes that they can actually deliver.  The “Bottom Line” here is that progress monitoring approaches must be connected to specific instructional or intervention goals, outcomes, and implementation strategies.  

   As noted earlier, progress monitoring is an assessment/evaluation approach.  Thus, for students with academic or behavioral problems, it occurs within the context of a data-based, functional assessment problem-solving process.  Unfortunately, some educators still believe that progress monitoring is the intervention.  Or, they believe that the intervention must fit the progress monitoring tool adopted by the district—rather than the tool being fit to the instructional or intervention outcomes desired.
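   To make concrete what a CBM progress monitoring "result" actually is, here is a minimal sketch (in Python, with entirely made-up weekly oral reading fluency scores) of the computation at the heart of most progress monitoring graphs: a growth slope fit to repeated probes.

    import numpy as np

    # A minimal sketch (with made-up data) of CBM progress monitoring:
    # fit a least-squares trend line to repeated weekly probes.
    weeks = np.arange(1, 9)                            # eight weekly probes
    wcpm = np.array([42, 45, 44, 48, 50, 49, 53, 55])  # words correct per minute

    slope, intercept = np.polyfit(weeks, wcpm, 1)      # slope = growth rate
    print(f"Growth rate: {slope:.1f} words correct/minute per week")

    # The slope answers "Is the student progressing?"  It does not answer
    # "Why or why not?" or "Which instruction or intervention should change?"

   In other words, even a well-built progress monitoring graph is only an evaluation tool; the instructional or intervention decisions still have to come from the functional assessment and problem-solving process described above.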
_ _ _ _ _

Flaw #6.  Establishing Rigid Rules on Students' Access to More Intensive Services

   It is not problematic when a state RtI or multi-tiered services guidebook outlines a blueprint on the prototypical sequences and decision rules that teachers need to follow to "move" students from Tier 1 to Tier 2 to Tier 3.  However, there is a problem when the sequence must be followed in a rigid, fixed way.

   Simplistically, there are two types of students with academic or behavioral problems: students with progressive, longstanding, or chronic problems; and students with significant, severe, or acute problems.

   The latter students, especially, often need immediate and intensive (Tier 3, if you will) services, supports, strategies, and/or programs.  They (and their teachers) should not have to go through a series of intervention layers (i.e., from Tier 1 to Tier 2) in order to “qualify” for Tier 3 and finally receive the intensity of services that they need.

   We all "get" that many administrators worry about an influx of inappropriate referrals to their Building-level Teacher Support Team.  But, if you break your leg, you need to go to the emergency room.  If you try to fix it yourself, or delay the intervention services needed, you may get an infection and lose the whole leg.

   The “Bottom Line” is that students who are in the general education classroom and curriculum (i.e., Tier 1), and who need immediate, intensive (Tier 3) assessment and interventions should receive that level of services and supports without having to go sequentially from Tier 1 to Tier 2 to Tier 3.

   The "trick is in the training."  Districts and schools need to create collaborative systems where everyone in the school is trained on the data-based problem-solving process.  And at the root of the process is a culture that supports early assessment and intervention through "problem solving, consultation, intervention" strategies that are accompanied by a "check and balance" approach that minimizes the number of capricious referrals to the Building-level Teacher Support Team.

   In our 35+ years of school-based experience, this works.  And the results are that (a) more students receive earlier and more successful instructional and intervention approaches; and (b) more general education teachers are leading the entire process. . . with greater enthusiasm, involvement, self-direction, and success.

   Isn't this the true goal of a multi-tiered system of supports? 
_ _ _ _ _ _ _ _ _ _

Flaw #7.  Setting a "Price" on Access to Multidisciplinary Consultation

   To expand on the “Bottom Line” in Flaw #6 above:  If a student needs to be immediately considered by the multidisciplinary Building-level Teacher Support Team, then this should occur without the need for a certain number of interventions, implemented for a certain number of weeks, under a certain set of conditions.

   Too many state RtI or multi-tiered services guidebooks and systems have created arbitrary decision rules that govern (or “set a price” for) how and when students can be discussed by the Building-level Teacher Support Team. 

   For example, a common one is: 

   Students cannot be discussed with the Building-level Teacher Support Team unless (for example) three interventions have been implemented by the general education teacher in his or her classroom, for at least three weeks each, and the progress monitoring or outcome data have clearly demonstrated no student progress.

   First of all, there is no research anywhere that validates this decision rule.

   Second, the instructional or intervention approaches needed by students should be based on functional assessments.  Moreover, the length of time needed to demonstrate each approach’s impact will vary by (a) the problem, (b) its history, (c) its status (chronic or acute), (d) the research associated with the approach, and (e) the intensity (e.g., how many times per week) of the approach’s implementation.

   Third, this decision rule often results in general education teachers—who have done everything that they know to do—implementing approaches, found on the internet or recommended “by a colleague,” that have no hope of success, and that (as discussed above) actually make the problem worse and the student more resistant to the next intervention.

   On one hand, this decision rule is like posting an armed guard at the door of an emergency room who allows access only to those patients—all in immediate need of these critical services—who have previously tried three interventions for three weeks each.

   On the other hand, this decision rule is more about controlling the process (that is, minimizing the number of problem-solving or special education referrals), than providing early, effective assessment and intervention services to students in need.
_ _ _ _ _

   But there is one additional extension:  if a teacher needs a consultation with a colleague in order to better understand and work with a student, there should not be restrictions on which colleagues are available.

   To be more explicit:  Some district RtI or multi-tiered services guidebooks and systems do not allow, for example, general education teachers to consult with special education personnel (teachers, OTs, PTs, speech pathologists, etc.) until a student needs "Tier III" attention. 

   Sometimes, the reasons for restricting this consult include:

   * “The special education teacher (OT, PT, etc.) is paid through federal special education funds that don't allow the consultation to occur earlier.”

   * “We don’t want to bias the special education professional now, when they might have to make a special education eligibility decision later.”

   * “Our special education personnel just do not have the time to provide these consultations over and above their already-full caseloads.”


   None of these reasons make sense—especially if a consultation early in the multi-tiered process results in "Tier 1" success . . . thereby eliminating the need for more strategic Tier 2, or more intensive Tier 3, assessment and/or intervention attention.

   Moreover, relative to the first reason above, this is simply not true.  

   Even with the most extreme interpretation, IDEA encourages early intervening services, and it allows districts to use up to 15% of their special education funding for services and supports that are not directed to students with a disability.  Thus, if needed, a district could allocate up to 15% of the FTE of its IDEA-funded personnel for general education teacher consultation, assessment, and intervention.
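   To make the arithmetic concrete, here is a quick sketch using entirely hypothetical figures (the grant amount and FTE count below are invented for illustration):

    # Hypothetical illustration of IDEA's 15% "early intervening services" allowance.
    # All dollar figures and FTE counts below are invented for the example.
    idea_part_b_grant = 2_000_000    # district's annual IDEA allocation, in dollars
    max_eis_dollars = 0.15 * idea_part_b_grant
    print(f"Up to ${max_eis_dollars:,.0f} could fund early intervening services")

    # The same cap, applied to staff time paid with IDEA funds:
    idea_funded_staff_fte = 20.0
    max_consultation_fte = 0.15 * idea_funded_staff_fte
    print(f"Up to {max_consultation_fte:.1f} FTE could support general education consultation")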
_ _ _ _ _ _ _ _ _ _  

Summary and Next Steps

   As noted earlier, in Part II of this Blog discussion (posted in about two weeks), I will share the proposal’s section addressing Ten Resulting Practices that Need Inclusion in a Multi-Tiered Services Re-Design, and make some concluding comments.

   Meanwhile, I hope that this discussion has been useful to you.

   In fact, to make it most useful, I recommend the following:

   * (Re)Read your state’s multi-tiered system of academic and behavioral support laws, statutes, and implementation guides.  Look for the flexibility (if present) in these documents where your state says, “This is recommended,” as opposed to “This is mandated.”

   Many departments of education overstate what is actually required by law, by making their recommendations sound like they are mandated.  More often than not, state department of education recommendations are actually advisory (the U.S. Office of Special Education Programs does this all the time).  And even if they are mandated, districts can always apply for a waiver.

   Said a different way:  You want to find the multi-tiered areas of flexibility— where you can create your own procedures and approaches—as long as they are defensible, and result in definitive student outcomes.
_ _ _ _ _

   * Analyze your state’s multi-tiered academic and behavioral process, as well as your district’s process, against the Flaws above to determine if you are (inadvertently) following procedures or practices that are represented in one or more of the Flaws.

   Remember:  the first step in changing is acknowledging the presence of a problem.
_ _ _ _ _

   * Finally, in a data-based way, look at how the flaws that are present have actually (negatively) impacted your students (and staff)—relative to, for example, academic or behavioral outcomes, delaying services or supports, or making the original problems more complex or resistant to change.
_ _ _ _ _

   Please understand that I am not trying to be critical of your multi-tiered programs, strategies, or approaches.  But I am strongly recommending that you complete an objective and independent analysis based on the information in this Blog.

   In the end, we need to implement programs in our schools that have the highest probability (and actuality) of success.

   We cannot figuratively play “Intervention Roulette”—hoping that the multi-tiered processes that are mandated, or that we create to meet those that are mandated, will “work” with our children and adolescents. 

   We must use processes that have actually demonstrated successful science-to-practice outcomes— based on sound psychometric, implementation science, and systems scale-up principles and practices.
_ _ _ _ _

   Meanwhile, I always look forward to your comments. . . whether on-line or via e-mail.

   If I can help you in any of the multi-tiered areas discussed in this message, I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students.

   As the leaves begin to turn into bright reds, oranges, and yellows . . . and, indeed, as they begin to fall, please accept my best wishes for a safe and productive two weeks . . . until next “we meet.”

Best,

Howie

Saturday, September 23, 2017

Hattie’s Meta-Analysis Madness: The Method is Missing!!! (Part III of III)



Why Hattie’s Research is a Starting-Point, but NOT the End-Game for Effective Schools

Dear Colleagues,

Introduction

   This three-part series is focusing on how states, districts, schools, and educational leaders make decisions regarding what services, supports, programs, curricula, instruction, strategies, and interventions to implement in their classrooms.  Recognizing that we need to use programs that have documented efficacy and the highest probability of implementation success, it has nonetheless been my experience that many programs are chosen “for all the wrong reasons”—to the detriment of students, staff, and schools.

Summarizing Part I of this Blog Series

   In Part I of this series (posted on August 26th), The Top Ten Ways that Educators Make Bad, Large-Scale Programmatic Decisions: The Hazards of ESEA/ESSA’s Freedom and Flexibility at the State and Local Levels [CLICK HERE], I noted that:

·       Beyond the policy-level requirements in the newly-implemented Elementary and Secondary Education/Every Student Succeeds Act (ESEA/ESSA), the Act transfers virtually all of the effective school and schooling decisions, procedures, and practices away from the U.S. Department of Education, and into the “hands” of the respective state departments of education and their state’s districts and schools.

·       Because of this “transfer of responsibility,” states, districts, and schools will be more responsible (and accountable) for selecting their own approaches to curriculum, instruction, assessment, intervention, and evaluation than ever before.

·       This will result in significant variability—across states and districts—in how they define school “success” and student progress, measure school and teacher effectiveness, apply assessments to track students’ standards-based knowledge and proficiency, and implement multi-tiered academic and behavioral services and interventions for students.

   All of this means that districts and schools will have more freedom—but greater responsibility—to evaluate, select, and implement their own ways of functionally addressing all students’ academic and social, emotional, and behavioral learning and instructional needs—across a multi-tiered continuum that extends from core instruction to strategic response and intensive intervention.

   Part I of this series then described the “Top Ten” reasons why educational leaders make flawed large-scale, programmatic decisions—that waste time, money, and resources; and that frustrate and cause staff and student resistance and disengagement.

   The flawed Reasons discussed were:

1.   The Autocrat (I Know Best)
2.   The Daydream Believer (My Colleague Says It Works)
3.   The Connected One (It’s On-Line)
4.   The Bargain Basement Boss (If it’s Free, It’s for Me)
5.   The Consensus-Builder (But the Committee Recommended It)
6.   The Groupie (But a National Expert Recommended It)
7.   The Do-Gooder (It’s Developed by a Non-Profit)
8.   The Enabler (It’s Federally or State-Recommended)
9.   The Abdicator (It’s Federally or State-Mandated)
10.   The Mad Scientist (It’s Research-based)

   By self-reflecting on these flawed approaches, the hope is that educational leaders will avoid these hazards, and make their district- or school-wide programmatic decisions in more effective ways.
_ _ _ _ _

Summarizing Part II of this Blog Series

   In Part II of this series (posted on September 9th), “Scientifically based” versus “Evidence-based” versus “Research-based”—Oh my!!! Making Effective Programmatic Decisions: Why You Need to Know the History and Questions Behind these Terms [CLICK HERE], I noted that:

·       The term “scientifically based” appeared twenty-eight times in ESEA/NCLB 2001; it was formally defined in that law; it appeared in IDEA 2004 (the current federal special education law); and it was (at that time) the “go-to” definition in federal education law when discussing how to evaluate the efficacy, for example, of research or programs that states, districts, and schools needed to implement as part of their school and schooling processes.

And yet, this term is found in ESEA/ESSA ONLY four times, and it appears to have been replaced by the term “evidence-based.”
_ _ _ _ _

·       The term “evidence-basedDID NOT APPEAR in either ESEA/NCLB 2001 or IDEA 2004, but it DOES appear in ESEA/ESSA 2015 sixty-three times—most often when describing “evidence-based research, technical assistance, professional development, programs, methods, instruction, or intervention.”

As the new “go-to” standard for determining whether programs or interventions have been empirically demonstrated to be effective, ESEA/ESSA 2015 defines this term.

   [CLICK HERE for the ESEA/NCLB 2001 “scientifically based” and ESEA/ESSA 2015 “evidence-based” definitions in Part II of this Blog]
_ _ _ _ _

·       The term “research-based” appeared five times in ESEA/NCLB 2001; it appears four times in IDEA 2004; and it appears once in ESEA/ESSA 2015.  When it appears, the term is largely used to describe programs that need to be implemented by schools to support student learning.

Significantly, the term “research-based” is NOT defined in either ESEA law (2001, 2015) or in IDEA 2004.
_ _ _ _ _

   Part II of this series went on to recommend a series of questions that educational leaders should ask when told that a program, strategy, or intervention is scientifically based, evidence-based, or research-based.

   For example, I noted: If someone endorses a program as “scientifically based,” educational leaders should ask what the researcher or practitioner means by that term.  Then, the educational leader should ask for (preferably refereed) studies that “support” the program, and their descriptions of the:

   * Demographic backgrounds and other characteristics of the students participating in the studies (so you can compare and contrast these students to your students);

   * Research methods used in the studies (so you can validate that the methods were sound, objective, and that they involved control or comparison groups not receiving the program or intervention);

   * Outcomes measured and reported in the studies (so you can validate that the research was focused on student outcomes, and especially the student outcomes that you are most interested in for your students);

   * Data collection tools, instruments, or processes used in the studies (so that you are assured that they were psychometrically reliable, valid, and objective—such that the data collected and reported are demonstrated to be accurate);

   * Treatment or implementation integrity methods and data reported in the studies (so you can objectively determine that the program or intervention was implemented as it was designed, and in ways that make sense);

   * Data analysis procedures used in the studies (so you can validate that the data-based outcomes reported were based on the “right” statistical and analytic approaches);

   * Interpretations and conclusions reported by the studies [so you can objectively validate that these summarizations are supported by the data reported, and have not been inaccurately- or over-interpreted by the author(s)]; and the

   * Limitations reported in the studies (so you understand the inherent weaknesses in the studies, and can assess whether these weaknesses affected the integrity of and conclusions—relative to the efficacy of the programs or interventions—drawn by the studies).
_ _ _ _ _

   The point of the questions and this discussion was to encourage educational leaders:

·       To go beyond “testimonials” and “hearsay” when programs, strategies, or interventions are recommended by others

·       To ask the questions and collect the information and data needed to objectively determine that a “recommended” program or intervention is independently responsible for the student outcomes that are purported and reported

·       To determine if there is enough objective data to demonstrate that the “recommended” program or intervention is appropriate for the educational leader’s own students, and if it will potentially result in the same positive and expected outcomes

·       To determine if the resources needed to implement the program are time- and cost-effective relative to the program’s “return-on-investment”

·       To determine if the “recommended” program or intervention will be acceptable to those involved (e.g., students, staff, administrators, parents) such that they are motivated to implement it with integrity and over an extended period of time
_ _ _ _ _ _ _ _ _ _

Today’s Discussion:  John Hattie and Meta-Analyses

   Professor John Hattie has been the Director of the Melbourne Educational Research Institute at the University of Melbourne, Australia, since March 2011.  His research interests include performance indicators, models of measurement, and the evaluation of teaching and learning. He is best known for his books Visible Learning (2009) and Visible Learning for Teachers (2012).

   Anchoring these books is Hattie’s critical review of thousands of published research studies in six areas that contribute to student learning: student factors, home factors, school factors, curricular factors, teacher factors, and teaching and learning factors.  Using those studies that met his criteria for inclusion, Hattie pooled the effect sizes from these individual studies, conducted different series of meta-analyses, and rank ordered the positive to negative effects of over a hundred approaches—again, related to student learning outcomes.

   In Visible Learning, for example, Hattie described 138 rank ordered influences on student learning and achievement based on a synthesis of more than 800 meta-studies covering more than 80 million students.  In his subsequent research, the list of effects was expanded (in Visible Learning for Teachers), and now (2016), the list—based on more than 1,200 meta-studies—includes 195 effects and six “super-factors.”  All of this research reflects one of the largest integrations of “what works best in education” available.
_ _ _ _ _

What is a Meta-Analysis?

   A meta-analysis is a statistical procedure that combines the effect sizes from separate studies that have investigated common programs, strategies, or interventions.  The procedure results in a pooled effect size that provides a more reliable and valid “picture” of the program or intervention’s usefulness or impact, because it involves more subjects, more implementation trials and sites, and (usually) more geographic and demographic diversity.  Typically, an effect size of 0.40 is used as the “cut-score” where effect sizes above 0.40 reflect a “meaningful” impact.

   Significantly, when the impact (or effect) of a “treatment” is consistent across separate studies, a meta-analysis can be used to identify the common effect.  When effect sizes differ across studies, a meta-analysis can be used to identify the reason for this variability.
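   For readers who want to see the core computation, below is a minimal sketch (in Python, with invented study results) of the fixed-effect, inverse-variance pooling that underlies many meta-analyses: each study's effect size is weighted by its precision, and the weighted average becomes the pooled effect that gets compared to benchmarks like the 0.40 cut-score.

    import numpy as np

    # A minimal sketch (with invented study results) of fixed-effect,
    # inverse-variance pooling--the core computation behind a meta-analysis.
    def pooled_effect(effects, std_errors):
        """Weight each study's effect by 1/SE^2, so larger and more
        precise studies count more toward the pooled estimate."""
        effects = np.asarray(effects, dtype=float)
        weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
        pooled = np.sum(weights * effects) / np.sum(weights)
        pooled_se = np.sqrt(1.0 / np.sum(weights))
        return pooled, pooled_se

    d = [0.55, 0.30, 0.48]    # hypothetical effect sizes (e.g., Cohen's d)
    se = [0.12, 0.20, 0.15]   # hypothetical standard errors
    est, est_se = pooled_effect(d, se)
    print(f"Pooled effect = {est:.2f} (SE = {est_se:.2f})")  # compare to 0.40

   Notice that the larger, more precise (low-SE) studies dominate the pooled result; this is one reason why the individual studies behind a meta-analysis matter so much.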
_ _ _ _ _

   Meta-analytic research typically follows some common steps.  These involve:

·       Identifying the program, strategy, or intervention to be studied
·       Completing a literature search of relevant research studies
·       Deciding on the selection criteria that will be used to include an individual study’s empirical results
·       Pulling out the relevant data from each study, and running the statistical analyses
·       Reporting and interpreting the meta-analytic results

   As with all research, there are a number of subjective decisions embedded in meta-analytic research, and thus, there are good and bad meta-analytic studies.

   Indeed, as emphasized throughout this three-part series, educational leaders cannot assume that “all research is good because it is published,” and they cannot assume that even “good” meta-analytic research is applicable to their communities, schools, staff, and students.

   And so, educational leaders need to independently evaluate the results of any reported meta-analytic research—including research discussed by Hattie—before accepting the results.

   Among the questions that leaders should ask when reviewing (or when told about the results from) meta-analytic studies are the following:

·       Do the programs, strategies, or interventions chosen for investigation use similar implementation steps or protocols?

   In many past Blogs, I have discussed the fact that the Positive Behavioral Interventions and Supports (PBIS) framework advocated by the U.S. Department of Education’s Office of Special Education Programs (and its funded national Technical Assistance centers) is a collection of different activities that, based on numerous program evaluations, different schools implement in different degrees (or not at all) in different ways.

   Given this, a meta-analysis of many separate PBIS research studies might conclude that the “PBIS framework contributes to student learning,” but the educational consumer has no idea which PBIS activities contributed to this result, nor how to functionally implement these different activities.

   In addition to this issue, some researchers warn of an “agenda bias” that occurs when researchers choose specific areas to investigate based on wanting a personally- or politically-motivated conclusion.  Often, this bias tends to affect (or continue to bias) many other research-related procedures (see the other question areas below)—resulting in questionable or invalid results.
_ _ _ _ _

Next Question:

·       Are the variables investigated, by a meta-analytic study, variables that are causally- versus correlationally-related to student learning, and can they be taught to a parent, teacher, or administrator?

   Educational leaders need to continually differentiate research (including meta-analytic research) that reports causal factors versus correlational factors.  Obviously, causal factors directly affect student learning, while correlational factors contribute to or predict student learning.

   Similarly, they need to recognize that some meta-analytic results involve factors (e.g., poverty, race, the presence of a significant disability or home condition) that cannot be changed, taught, or modified. 

   Moreover . . . once you read and understand Hattie’s functional definitions for the terms that he uses to summarize his meta-analyses, you realize that three out of six of his “Super Factors” cannot be changed by any teacher or classroom intervention (Teacher Estimates of Achievement, Self-Reported Grades, and Piagetian Levels).

   Many of the approaches that Hattie rates as having the strongest effects on student learning contribute to (but do not cause) these student outcomes. 

   For example, while effective classroom instruction and behavior management contribute to student learning, mastery, and proficiency, some students learn a great deal even when their teachers are not instructionally effective or their classrooms are not consistently well-managed.  Thus, these factors correlationally contribute to student learning, but they do not cause student learning.

   This crucial point is not intended to invalidate Hattie’s meta-analytic results (or anyone else’s).  It simply is to say that educational leaders need to determine the “meaningfulness” of meta-analytic results, while also putting them into their “implementation context.”

   Continuing on:  while some studies differentially and concurrently investigate multiple programs, strategies, or interventions, these individual studies are few and far between. 

   Thus, educational leaders cannot simply take the “Top Ten” Hattie approaches on his list, implement them, and assume that student learning results will increase.  This is because most of these “Top Ten” approaches have never been studied together, and they might not be applicable to their students or instructional conditions.

   Relatedly, educational leaders need to be wary of “Hattie consultants” who believe that they can synthesize all of the independent meta-analyses of different programs, strategies, or interventions conducted by Hattie into a meaningful implementation plan and process for their school or students.

   Critically. . . Hattie has provided a useful “road map” to success. . . but remember, there are “many roads to Rome.”
_ _ _ _ _

Next Question:

·       In conducting the literature review, did the researchers consider (and control for) the potential of “publication bias”?

   One of the realities of published research is that journals most-often publish research that demonstrates significant effects.  Thus, a specific program or intervention may have ten published articles that showed a positive effect, and fifty other well-designed studies that showed no or negative effects.  As the latter unpublished studies are not available (or even known by the researcher), they will not be included in the meta-analysis.  And so, while the meta-analysis may show a positive effect for a specific program, this outcome may not reflect its “actual” negative to neutral impact.

   There are research methods and tests (e.g., funnel plots, the Tandem Method, Egger’s regression test) to analyze the presence of publication bias, and to decrease the potential of false-positive conclusions.  These, however, are beyond the scope of this Blog.
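   Still, for the statistically curious, here is a minimal sketch of Egger's regression test (in Python, assuming the statsmodels library, and using invented study data).  The idea: regress each study's standardized effect on its precision; an intercept significantly different from zero flags funnel-plot asymmetry, one signal of possible publication bias.

    import numpy as np
    import statsmodels.api as sm

    # A minimal sketch of Egger's regression test for funnel-plot asymmetry.
    # The effect sizes and standard errors below are invented for illustration.
    effects = np.array([0.62, 0.48, 0.55, 0.20, 0.71, 0.35])
    std_err = np.array([0.25, 0.18, 0.22, 0.08, 0.30, 0.12])

    snd = effects / std_err      # standard normal deviates
    precision = 1.0 / std_err    # small studies have low precision

    fit = sm.OLS(snd, sm.add_constant(precision)).fit()
    intercept, p_value = fit.params[0], fit.pvalues[0]

    # An intercept far from zero (with a small p-value) suggests that small
    # studies are reporting systematically larger effects.
    print(f"Egger intercept = {intercept:.2f}, p = {p_value:.3f}")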

   Suffice it to say that some statisticians have suggested that 25% of meta-analyses in the psychological sciences may have inherent publication biases.

  What should educational leaders do? Beyond their own self-study of the meta-analytic research—including the individual research studies involved—that appears to support a specific program, strategy, or intervention, educational leaders need to:

·       Identify the short- and long-term “success indicators” of these programs specifically for their schools or with their students;

·       Conduct pilot tests before scaling up to whole-school or system-wide implementation;

·       Identify and use sensitive formative evaluation approaches that detect—as quickly as possible—programs that are not working; and

·       Maintain an “objective, data-driven perspective” regardless of how much they want the program to succeed.

   In other words, educational leaders need to revalidate any selected program, strategy, or intervention when implemented with their schools, staff, and/or students—regardless of that program’s previous validation (which, once again, may be due to publication bias).
_ _ _ _ _

Next Question:

·       What were the selection criteria used by the author of the meta-analysis to determine which individual studies would be included in the analysis, and were these criteria reliably and validly applied?

   This is an important area of potential bias in meta-analytic research.  It occurs when researchers, whether consciously or not, choose biased selection criteria.  For example, they may favor large-participant studies over single-subject studies, or randomized controlled studies over qualitative studies.

   This selection bias also occurs when researchers do not reliably and/or validly apply their own sound selection criteria.  That is, they may include certain studies that objectively don’t “qualify” for their analysis, or exclude other studies that meet all of the criteria.

   Regardless, selection biases influence the individual studies included (or not included) in a meta-analysis, and this may skew the results.  Critically, the “skew” could be in any direction.  That is, the analysis might incorrectly result in negative, neutral, or positive results.

   This issue is compounded even further because Hattie included, in some of his own meta-analyses, numerous meta-analyses conducted by other researchers.  Thus, he might have pooled other authors’ selection biases with his own selection biases to create some “Super Biases” (for example) within his “Super Factors.”

   I know that many educational leaders, at this point in our “conversation,” are probably wondering (maybe, in frustration), “Why can’t I just ‘trust the experts’?” or “How do I do all of this?” 

   And I do feel your pain...

   But the “short answer” to the first question (as noted in the earlier two Blogs in this series) is that “blind trust” may result in adopting a program that does not succeed; that wastes a great deal of time, training, money, and materials; and that undermines student success and staff confidence.

   The “short answer” to the second question is that these questions should be posed to the researcher or the person who is advocating a “meta-analytically-proven” program.  Let them show you the studies and reveal the drilled-down data that is (presumably) at the foundation of their recommendation.

   But. . . in addition. . . please recognize that many school districts have well-qualified professionals (either in-house, at a nearby university, in the community/region, or virtually on-line) with the research and analysis background to “vet and validate” programs, strategies, and interventions of interest. 

   Use these resources. 

   The “front-end” time spent thoroughly evaluating a program will virtually always save enormous amounts of “back-end” time when a poorly researched or poorly chosen program is actually implemented.
_ _ _ _ _

Next Question:

·       Were the best statistical methods used in the meta-analysis?  Did one or two large-scale or large-effect studies outweigh the results of other small-scale, small-participant studies that also were included?  Did the researcher’s conclusions match the actual statistical results from the meta-analysis?

   I’m not going to answer these questions in detail. . . as we’re now teetering on methodologically complex (but important) areas.  [If you want to discuss these with me privately, give me a call.]

   My ultimate point here is that—as with any research study—we need to know that the meta-analytic research results and interpretations for any program, strategy, or intervention are sound.

   As noted immediately above, educational leaders need to invest in “high probability of success” programs.  Anything less is irresponsible.
_ _ _ _ _

But There’s More:  The Method is Missing

   But. . . there IS more . . . even when the meta-analytic research is sound.

   As alluded to above. . . just because we know that a program, strategy, or intervention significantly impacts student learning, we do not necessarily know the implementation steps used in the research studies from which the significant effect was calculated . . . and we cannot assume that all or most of those studies used the same implementation steps. 

   To get to the point where we know exactly what implementation steps to replicate and functionally use in our schools and with our staff and students (to get the benefit of a particular effect), we (again) need to “research the research.”

   Case in point.  Below are the approaches currently at the top of Hattie’s rankings. . . those with the strongest effects on student learning and achievement:

   Teacher estimates of achievement
   Collective teacher efficacy
   Self-reported grades
   Piagetian programs
   Conceptual change programs
   Response to Intervention
   Teacher credibility
   Micro-teaching
   Cognitive task analysis
   Classroom discussion
   Interventions for LD
   Teacher clarity
   Reciprocal teaching
   Feedback
   Providing formative evaluations
   Acceleration
   Creativity programs
   Self-questioning
   Concept mapping
   Problem-solving teaching
   Classroom behavior

   After reviewing these. . . OK . . . I’ll admit it.  As a reasonably experienced school psychologist, I have no idea what the vast majority of these approaches are at a functional level. . . much less what implementation steps to recommend.

   To begin to figure it out, I would first go back to Hattie and look at a Glossary (for example, the one from Visible Learning for Teachers, 2012) that explains the research reflected in the effect sizes for the approaches he has rank-ordered.
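   As quick background for reading any such Glossary: the “effect size” (d) behind these rankings is, in its most common form, a standardized mean difference.  A sketch of the basic formula (individual studies and meta-analyses operationalize it in somewhat different ways):

```latex
% Standardized mean difference (Cohen's d), the statistic behind the rankings
d = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}}
```

   Hattie’s well-known “hinge point” of d = 0.40, roughly the average effect across all of the influences he has synthesized, is the benchmark he uses to flag an influence as above average.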


Example 1:  Self-Reported Grades

   One of Hattie’s Super Factors is “Self-Reported Grades.”  For this factor, the Glossary cited above provides the following information:

Self-reported grades are at the top of all influences. Children are the most accurate when predicting how they will perform. Hattie explains that if he could write his book Visible Learning for Teachers again, he would re-name this learning strategy, “Student Expectations” to express more clearly that this strategy involves the teacher finding out what are the student’s expectations, and pushing the learner to exceed these expectations. Once a student has performed at a level that is beyond their own expectations, he or she gains confidence in his or her learning ability. 

Example for Self-reported grades: Before an exam, ask your class to write down what mark the student expects to achieve. Use this information to engage the student to try to perform even better.

Hattie cites five meta-studies for this effect:

Mabe/West (1982): Validity of self-evaluation of ability

Falchikov/Boud (1989): Student Self-Assessment in Higher Education

Ross (1998): Self-assessment in second language testing

Falchikov/Goldfinch (2000): Student Peer Assessment in Higher Education

Kuncel/Crede/Thomas (2005): The Validity of Self-Reported Grade Point Averages, Class Ranks, and Test Scores


   As noted earlier in this Blog, and as defined here, a student’s Self-Reported Grades cannot be changed by having a classroom teacher “do an intervention.”  If students’ beliefs about their prospective grades are inaccurate, their teachers might be able to provide them with more data or feedback and, thus, improve their accuracy.  But what happens if students accurately state that they are going to fail a test . . . and they do?

   How will that change their motivation or proficiency in the future?

   Or, what if students underestimate their grades on a test, and perform better than expected?  How will this necessarily improve these students’ motivation such that they master more material in the future?  Perhaps the underestimate, followed by the “better-than-expected” grades, will lull these students into believing that they are already “doing enough” to get good grades . . . they just didn’t realize it before?

   Herein lies the danger. 

   In order to use Hattie’s results, we need to know his definition of Self-Reported Grades, the research that was integrated into the meta-analysis, whether the variable can be externally influenced (e.g., through a teacher’s intercession or intervention), and then the explicit, scientifically-based methodology needed to effect the change.

   None of these conditions are immediately or functionally apparent from a rank-ordered list of meta-analytic effect sizes.

   And, there is no single consultant or “anointed” group of consultants who “hold the keys” to operationalizing Hattie’s statistics into student success.
_ _ _ _ _

   But let’s take two more Hattie factors/approaches to further demonstrate that “The Method is Missing.”

Response to Intervention and Comprehensive Interventions for Learning Disabled Students

   Response to Intervention, once again, is one of Hattie’s Super Factors. 

   The Glossary defines Response to Intervention as “an educational approach that provides early, systematic assistance to children who are struggling in one or many areas of their learning. RTI seeks to prevent academic failure through early intervention and frequent progress measurement.”  In Visible Learning for Teachers, Hattie devotes one paragraph to Response to Intervention—citing seven generic “principles.”

   Hattie’s meta-analysis of the research that he categorized as “Comprehensive Interventions for Learning Disabled Students” resulted in one of the top five effect sizes for impact on student learning and achievement. 

   In the cited Glossary, it was noted that:

The presence of learning disability can make learning to read, write, and do math especially challenging. Hattie admits that “it would be possible to have a whole book on the effects of various interventions for students with learning disabilities” (Hattie 2009), and he references a 1999 meta-study.

To improve achievement teachers must provide students with tools and strategies to organize themselves as well as new material; techniques to use while reading, writing, and doing math; and systematic steps to follow when working through a learning task or reflecting upon their own learning. Hattie also discusses studies that found that “all children benefited from strategy training; both those with and those without intellectual disabilities.”


   Once again—for BOTH of these approaches, there is no specificity.  Moreover, NO ONE reading Hattie’s books would have a clue as to where to begin the implementation process for either.

   More specifically:  Response to Intervention is not a single, replicable intervention. 

   Many different researchers have defined it, its components, its implementation processes, and its applicability (for example, to literacy, math, language arts, behavior) in many different ways.

   And so. . . from Hattie’s research, one would conclude that this is a worthwhile area to investigate when students are academically struggling or presenting with challenging behavior.  But one would still have to analyze the specific research for one’s particular area of student concern.

   More specifically:  Hattie describes “Comprehensive Interventions for Learning Disabled Students” in the plural.

   And so. . . from Hattie’s research, which learning disabilities did his meta-analytic studies address?  What were the interventions?  At what ages and levels of severity did the interventions work with students?  And how was “success” defined and measured?

   As Hattie himself noted. . . he could write a book just in this area (and some esteemed educators have).

   But once again, while it is important to know that some interventions for learning disabled students work, one would have to answer the questions immediately above, know the research-to-practice literature in a specific area of disability, and have the consultation skills to help teachers implement these interventions “in real time.”
_ _ _ _ _ _ _ _ _ _

Conclusions

   I want to make it clear that this Blog is NOT questioning Hattie’s research in any way. 

   Hattie has made many outstanding contributions to our understanding of the research in areas that impact student learning and the school and schooling process.

   However, consistent with the theme of the three Blogs in this series, I AM expressing concerns—and, hopefully, providing good guidance—as to how educational leaders need to analyze, understand, use, and make systems-level decisions based on school and psychoeducational research. . . research that varies in both quality and utility.

   As noted numerous times across the three Blogs:  I fully understand how challenging it is for districts and schools to analyze the research related to the empirical efficacy of a specific program, strategy, or intervention.  I also recognize—as a practitioner who works in the schools—their limited time and more limited resources. 

   And I agree that districts and schools should be able to trust the “national experts”—from their national associations, to their departments of education, to their published journals—in this regard.

   But testimonials do not qualify as research, and—unfortunately—some “research” is published in the absence of impartiality.

   We need to be careful.

   Districts and schools need to selectively do their own due diligence. . . or at least consult with professionals who can provide objective, independent evaluations of the curricula, programs, or interventions being considered for student, staff, and school implementation. 

   Hopefully, the narrative in these three Blogs will provide educational leaders with the information they need and the questions they should ask. . . an assist in the due diligence process.


   In the end, schools and districts should not invest time, money, professional development, supervision, or other resources in programs that have not been fully validated for use with their students and/or staff. 

   Such investments are not fair to anyone—especially when they become counterproductive by (a) not delivering the needed results, (b) leaving students further behind, and/or (c) creating staff resistance to “the next program”—which might, parenthetically, be the “right” program.
_ _ _ _ _

   I hope that this discussion has been useful to you.

   As always, I look forward to your comments. . . whether on-line or via e-mail.

   I hope that your school year continues to be successful.  We are still thinking about those in the greater Houston area and across Florida. . . and now, in Puerto Rico and across the Caribbean.

   If I can help you in any of the areas discussed during this Blog series, I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students, staff/colleagues, school(s), and district.

Best,

Howie