
Saturday, April 13, 2019

How Hattie’s Research Helps (and Doesn’t Help) Improve Student Achievement


Hattie Discusses What to Consider, Not How to Implement It . . . More Criticisms, Critiques, and Contexts

[CLICK HERE for the full Blog message]

Dear Colleagues,

Introduction

   By the time you read this Blog, I will have just landed in Singapore where I am one of six presenters at the World EduLead 2019 Conference [CLICK HERE] sponsored by the International Association for Scholastic Excellence (INTASE).

   During my week here, I will be presenting two full-day Master Classes, two Keynotes, and a Symposium with Michael Fullan (School Leadership), Carol Ann Tomlinson (Differentiated Instruction), and three other international education greats.

   Altogether, I will be presenting the following:

  • Seven Evidence-Based Strategies to Systemic Success in Schools
  • The Seven C’s of Success: Strengthening Staff Relationships to Ensure Student Success
  • School Reform: Strategic Planning, Shared Leadership, and Student Success
  • Helping Hattie Work: Translating Meta-Analysis into Meaningful Student Learning Outcomes

   While re-researching John Hattie’s work for the last full-day presentation above, I uncovered new “criticisms, critiques, and contexts” that motivated me to update at least two past Hattie Blog messages with this new one.

   In this Blog, then, we will describe the concerns in detail, and then discuss examples of how Hattie’s work can be used effectively and defensibly, from a science-to-practice perspective, for students, by staff, and in schools.

   To accomplish this, the full Blog message will (a) briefly overview the concerns; (b) present a primer on meta-analysis; (c) quote from the concerns of three notable researchers; (d) discuss how to go from “effect to effective practice;” and (e) describe the questions to ask the “outside” Hattie consultant, before you hire him or her.

[CLICK HERE for the full Blog message]
_ _ _ _ _ _ _ _ _ _

A Brief Overview of Concerns with Hattie’s Research 

   Over the past decade especially, John Hattie has become internationally known for his meta-meta-analytic research into the variables that best predict students’ academic achievement.  Indeed, some view his different Visible Learning books (which have now generated a “Hattie explosion” of presentations, workshops, institutes, and “certified” Hattie consultants) as an educational “Bible” that shows educators “the way” to succeed with students.

   As such, Hattie has assumed a “rock star” status. . . which creates an illusion that his work is “untouchable,” that it cannot be critiqued, and that it certainly can’t be wrong.

   As of this writing, Hattie’s research is based on the synthesis of over 1,500 meta-analyses comprising more than 90,000 studies involving more than 300 million students around the world.  In more statistical terms, Hattie takes others’ published meta-analyses (each investigating, for example, a specific educational approach such as cooperative learning, or a specific intervention such as Reading Recovery) and pools them together, statistically conducting a meta-meta-analysis.

   In doing this, he averages the effect sizes from many other meta-analyses that themselves have pooled research that investigated—once again—the effect of one psychoeducational variable, strategy, intervention, or approach on student achievement.
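   To make this “averaging of averages” concrete, below is a minimal sketch using invented effect sizes (not Hattie’s actual numbers).  Each value stands for the pooled effect size reported by one published meta-analysis of the same strategy; the single average that results is the kind of number a meta-meta-analysis reports, and the spread is the information that the single average hides.

```python
# A minimal sketch with invented numbers (not Hattie's actual data):
# each value is the pooled effect size reported by one published
# meta-analysis of the same strategy (e.g., a feedback-style intervention).
from statistics import mean, stdev

meta_analytic_effect_sizes = [0.15, 0.42, 0.55, 0.90, 1.20]

overall = mean(meta_analytic_effect_sizes)   # the "meta-meta-analytic" average
spread = stdev(meta_analytic_effect_sizes)   # the variability that average hides

print(f"Average effect size: {overall:.2f}")   # 0.64 -> looks "meaningful" (> 0.40)
print(f"Standard deviation:  {spread:.2f}")    # 0.41 -> the underlying results range widely
```

   The single 0.64 looks decisive; but, by itself, it cannot tell a school whether its situation resembles the 0.15 studies or the 1.20 studies.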
_ _ _ _ _

   While the magnitude and sheer effort of what Hattie has done is impressive. . . there are a number of major methodological problems with his statistical approaches and interpretations, as well as a number of additional major science-to-practice implementation problems. 

   To foreshadow the more comprehensive discussion later in this Blog, below is an example of one of his primary methodological problems, and one of his primary implementation problems.

   Educators need to fully understand these problems in order to benefit from this research, especially on behalf of their students.
_ _ _ _ _

An Example of a Methodological Problem in Hattie’s Research

  One major methodological problem is that Hattie’s statistical analyses may be flawed.  

   More specifically, a number of notable statisticians (see the section on this below) have questioned whether the effect sizes from different independent meta-analyses can be averaged and pooled into a single meta-meta-analytical effect size—which is exactly what Hattie is doing.

   As such, they don’t believe that the statistical approach used by Hattie in his research is defensible. . . which means that some of his research results may be incorrect.

   Metaphorically, what Hattie is doing is akin to averaging the daily average temperatures for March across 100 years. . . and then saying that the 100-year average March temperature in, say, Washington, D.C. is 48 degrees (it actually is; I looked it up).

   While you can statistically calculate this, the conclusion regarding the 48-degree average temperature may not be functionally accurate or, more importantly, meaningful (if you are planning a trip to DC). 

   First of all, in a typical year, Washington, D.C.’s March temperature may range from 37 degrees on one day to 59 degrees on another day, a range of 22 degrees.  So, even in looking at one year’s worth of March temperatures, you need to statistically address the temperature range during any specific month. . . and then you need to look at this variability over 100 years. 

   Given all of this, the 48-degree, 100-year average clearly does not tell the entire story.

   The problem with this “single” temperature is compounded by the fact that there may be different “micro-climates” in Washington, D.C.  Thus, the daily temperature on any one March 15th, for example, may be 45 degrees in the Northwest part of the city, but 52 degrees in the Southeast part.

   Finally, from year to year. . . over 100 years. . . there may be some seasons that are colder or warmer than others.  Not to get political, but if we were to factor in the impact of Global Warming, it may be that the average March temperature for the most recent 10 years is significantly warmer than the average for the 90 years before. . . and, therefore, more accurate and meaningful for our current needs.
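   To put (invented) numbers to this metaphor, here is a small sketch; the temperatures below are made up for illustration and are not real Washington, D.C. climate data.

```python
# Invented numbers for illustration only (not real Washington, D.C. data).
from statistics import mean

# Average March temperature (degrees F) for each of 100 years, with the
# most recent 10 years made warmer to mimic a recent warming trend.
first_90_years = [47.0] * 90
recent_10_years = [52.0] * 10
all_years = first_90_years + recent_10_years

print(f"100-year March average: {mean(all_years):.1f}")       # 47.5 -> the single "headline" number
print(f"Most recent 10 years:   {mean(recent_10_years):.1f}")  # 52.0 -> more relevant for next March

# The same masking happens across "micro-climates" (subgroups of the city):
northwest, southeast = 45.0, 52.0
print(f"Citywide daily average: {(northwest + southeast) / 2:.1f}")  # 48.5 hides a 7-degree gap
```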
_ _ _ _ _

   There is, at least, one additional embedded issue.  Measuring temperature is scientifically far more reliable and valid than the diverse measures used in different studies (or at different times in a school) to measure student achievement.  A temperature is measured by a thermometer, and most thermometers will give basically the same reading because they are scientifically calibrated instruments.

   With the meta-analyses used by Hattie, different researchers operationalize “student achievement” (the dependent, or outcome, measure) in different ways.  Even if several of them operationalize student achievement the same way, they still may use different measurement tools or metrics. . . that produce significantly different results. 

   Thus, the measurement of achievement is going to have far more variability across the studies that Hattie pools than a thermometer in Washington, D.C. in March.
_ _ _ _ _

An Example of an Implementation Problem in Hattie’s Research

  The one major implementation problem that we will discuss right now is that, for any specific effect size area, educators need to know the implementation methods that were used in all of the studies included in the original meta-analyses that Hattie pooled into his meta-meta-analyses.  

   The point here is that, unless a program or intervention has been standardized in a specific effect size area, and the same implementation steps were used in every study included in a meta-analysis (or in Hattie’s meta-meta-analysis) in that area, it is possible that one implementation approach contributed more to the positive effect size on student achievement than another approach.

   For example, given Hattie’s current data, “cognitive task analysis” has a 1.29 effect size relative to positively impacting student achievement.  It is unlikely, however, that every study in every meta-analysis pooled by Hattie used the same step-by-step implementation process representing “cognitive task analysis.”

   Thus, Hattie’s research tells us what to consider (i.e., cognitive task analysis), but not necessarily the specific research-validated steps in how to implement it.

   For an individual school to implement the cognitive task analysis approach or steps that contributed most to the positive effect size that Hattie reports, its leaders need to know—statistically and relative to their implementation steps—what individual studies were integrated into the meta-analyses and Hattie’s meta-meta-analysis.

   But they also need to know which studies were done with the same type of students (e.g., gender, socio-economic status, race, geographical location, type and quality of school, etc.) that they are currently teaching in their school.

   That is, it may be that the students involved in the meta-analytic studies used by Hattie do not match the students in the schools that we are working with.  Thus, while the research used by Hattie may be “good” research (for some students in some schools in some communities), it may not be the “right” research for our students, schools, and community.

   To summarize so far:  If schools are going to use Hattie’s research in the most effective way for their specific students, a Multiple-gating process of decision-making must be used.

   This Multiple-gating Process should include:

  • Step 1.  Identify your school’s history and status, resources and capacity, and current positive and needed outcomes relative to student achievement.
  • Step 2.  Determine which Hattie variables will most improve student achievement—with a constant awareness that many of these variables will interact or are interdependent.
  • Step 3.  Evaluate the methodological and statistical quality and integrity of the meta-analytic studies that Hattie included in his meta-meta-analyses.
NOTE:  If Hattie’s meta-meta-analysis has flaws or included flawed meta-analytic studies, identify the best separate meta-analysis studies and continue this multiple-gating process.
  •  Step 4.  Evaluate the demographics and other background characteristics of the schools, staff, and students involved in the meta-analytic studies used by Hattie in his meta-meta-analyses to validate that they match the school demographics and background characteristics where you plan to implement the program, strategy, or intervention.
  • Step 5.  Using and analyzing Hattie’s best meta-meta-analytic study (or the best individual meta-analysis studies, as immediately above), identify what program(s) or strategy(ies), and what specific implementation approaches and steps, were most responsible for the positive effects on student achievement.
  • Step 6.  Finalize the selection of your program or strategy, and its implementation approaches and steps, and develop an Implementation Action Plan that identifies who will be involved in implementation, what training and resources they need, how you will engage the students, staff, and parents, how you will evaluate the short- and long-term student achievement outcomes, and what the implementation steps and timelines will be.
  • Step 7.  Resource, train, engage, implement, evaluate, fine-tune, implement, and evaluate.
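   For readers who think procedurally, the sketch below expresses the same Multiple-gating logic as a sequence of yes/no gates.  The gate wording is mine and is offered only as a hypothetical illustration of the core idea: each step must be passed before the next one is attempted, and Step 7 (resource, train, implement, evaluate) happens only after every gate has been passed.

```python
# A hypothetical illustration of the Multiple-gating idea (the gate wording
# paraphrases Steps 1 through 6 above; it is not an official tool).
GATES = [
    "Have we identified our school's status, resources, and needed achievement outcomes?",
    "Have we determined which Hattie variables will most improve achievement here?",
    "Are the meta-analyses behind those variables methodologically and statistically sound?",
    "Do the study demographics match our own students, staff, and school?",
    "Do we know which specific implementation approaches produced the positive effects?",
    "Have we finalized our selection and built an Implementation Action Plan?",
]

def passes_all_gates(answers):
    """Stop at the first gate that fails; proceed to implementation only if all pass."""
    for gate, answer in zip(GATES, answers):
        if not answer:
            print(f"Stopped at gate: {gate}")
            return False
    return True

# Example: the third gate (study quality) fails, so Step 7 implementation waits.
print(passes_all_gates([True, True, False, True, True, True]))
```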
_ _ _ _ _

   As we proceed to the next section of this Blog, let me be clear.  This Blog was not written to criticize or denigrate, in any way, Hattie on a personal or professional level.  He is a prolific researcher and writer, and his work is quite impressive.

   However, the full Blog message will critique the statistical and methodological underpinnings of meta- and meta-meta-analytic research, and discuss its strengths and limitations.  But most essentially, the focus ultimately will be on delineating the research-to-practice implications of Hattie’s work, and how to implement it with students in the most effective and efficient ways.
_ _ _ _ _

   To this end, and once again, it is important that educators understand:
  • The strengths and limitations of meta-analytic research—much less meta-meta-analytic research;
  • What conclusions can be drawn from the results of sound meta-analytic research;
  • How to transfer sound meta-analytic research into actual school- and classroom-based instruction or practice; and
  • How to decide if an effective practice in one school, classroom, or teacher is “right” for your school, classrooms, and teachers.
[CLICK HERE for the full Blog message]

   While this all provides a “working outline,” let’s look at some more details.
_ _ _ _ _ _ _ _ _

A Primer on Meta-Analysis

What is it?

   A meta-analysis is a statistical procedure that combines the effect sizes from separate studies that have investigated common programs, strategies, or interventions.  The procedure results in a pooled effect size that provides a more reliable and valid “picture” of the program or intervention’s usefulness or impact because it involves more subjects, more implementation trials and sites, and (usually) more geographic and demographic diversity.  Typically, an effect size of 0.40 is used as the “cut-score” where effect sizes above 0.40 reflect a “meaningful” impact.
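   For readers who want the arithmetic behind these statements, the standard textbook formulas are roughly as follows.  A fixed-effect, inverse-variance-weighted pooling is shown for illustration; meta-analysts also use random-effects models, and any individual meta-analysis (or Hattie’s own computations) may differ.

```latex
% Cohen's d for a single treatment-versus-control study
d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}}

% Fixed-effect (inverse-variance-weighted) pooled effect size across k studies,
% where study i contributes an effect d_i with estimated variance v_i
\bar{d} = \frac{\sum_{i=1}^{k} w_i\, d_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{v_i}
```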

   Significantly, when the impact (or effect) of a “treatment” is consistent across separate studies, a meta-analysis can be used to identify the common effect.  When effect sizes differ across studies, a meta-analysis can be used to identify the reason for this variability.
_ _ _ _ _

How is it done?

   Meta-analytic research typically follows some common steps.  These involve:
  • Identifying the program, strategy, or intervention to be studied
  • Completing a literature search of relevant research studies
  • Deciding on the selection criteria that will be used to include an individual study’s empirical results
  • Extracting the relevant data from each study, and running the statistical analyses (a minimal sketch of one common pooling computation appears after this list)
  • Reporting and interpreting the meta-analytic results
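   As a minimal sketch of the “running the statistical analyses” step referenced in the list above, here is a fixed-effect, inverse-variance-weighted pooling with invented numbers; real meta-analyses typically use random-effects models and specialized software, so this is illustrative only.

```python
# Fixed-effect, inverse-variance-weighted pooling: a minimal sketch using
# invented numbers, not data from any actual meta-analysis.
# Each tuple is (effect size d_i, estimated variance v_i) for one study.
studies = [(0.30, 0.04), (0.55, 0.02), (0.20, 0.05), (0.65, 0.03)]

weights = [1.0 / v for _, v in studies]                     # w_i = 1 / v_i
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5                     # standard error of the pooled estimate

print(f"Pooled effect size:    {pooled:.2f}")
print(f"Pooled standard error: {pooled_se:.2f}")
```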
   As with all research, and as reflected in the steps above, there are a number of subjective decisions that those completing a meta-analytic study must make.  And, these decisions could be sound, or they could be not so sound.  They could be defensible, or they could be arbitrary and capricious.  They could be well-meaning, or they could be biased or self-serving. 

   Thus, there are good and bad meta-analytic studies.  And, educators are depending on the authors of each meta-analytic study (or, perhaps the journal reviewers who are accepting the study for publication) to include only those studies that are sound.

   By extension, educators also are depending on Hattie to include only those well-designed and well-executed meta-analytic studies in his meta-meta-analyses.

   But, unfortunately, this may not be the case.

   In his 2009 Visible Learning book, Hattie states (pg. 11), “There is. . . no reason to throw out studies automatically because of lower quality.”

   This suggests that Hattie may have included some lower quality meta-analytic studies in some (which ones?) of his many meta-meta-analyses.

   Indeed. . . What criteria did he use when including some lesser-quality meta-analytic studies?  How did he rationalize including even one lower quality study?  And, most importantly, how did these lower quality studies impact the effect sizes and the functional implications of the research?

   These are all important questions that speak directly to the educators who are trying to decide which Hattie-endorsed approaches to use in their pursuit of improved student achievement scores.  These questions similarly relate to educators’ decisions on how to effectively implement the approaches that they choose.
_ _ _ _ _

How do you Interpret an Effect Size?

   As noted above, Hattie (and other researchers) use an effect size of 0.40 as the “cut-score” or “hinge point” where a service, support, strategy, program, or intervention has a “meaningful” impact on student achievement.

   Visually, Hattie represents the continuum of effect sizes as a “Barometer” (see below).
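   Because the Barometer graphic is not reproduced here, the sketch below approximates the zones it depicts.  The 0.15 boundary between “developmental” and “teacher” effects is taken from Hattie’s published Barometer as commonly cited, and should be verified against the original before being used.

```python
def barometer_zone(effect_size: float) -> str:
    """Approximate zones of Hattie's Barometer (boundaries as commonly cited)."""
    if effect_size < 0.0:
        return "reverse effects"         # the influence appears to lower achievement
    if effect_size < 0.15:
        return "developmental effects"   # gains expected from maturation alone
    if effect_size < 0.40:
        return "teacher effects"         # gains expected from typical teaching
    return "zone of desired effects"     # above the 0.40 "hinge point"

print(barometer_zone(1.29))   # e.g., cognitive task analysis, per Hattie's list
print(barometer_zone(0.25))   # falls in the "teacher effects" band
```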


   But this doesn’t tell the entire story.  In fact, some researchers are very uncomfortable with this barometer and how Hattie characterizes some of the effect sizes along the continuum.
_ _ _ _ _

   Matthew A. Kraft, from Brown University, is one such researcher.  In his December 2018 working paper, Interpreting Effect Sizes of Education Interventions, Kraft identified five guidelines for interpreting effect sizes in education.

[CLICK HERE for this paper]

   Kraft’s five guidelines are cited below.  For a detailed discussion of each, with their implications and practical examples, go to the complete Blog message.

[CLICK HERE for the full Blog message]
  • Guideline #1.  The results from correlational studies, when presented as effect sizes, are not causal effects.  Moreover, effect sizes from descriptive and correlational studies are often larger than those from causal studies.
  •  Guideline #2.  The magnitude of effect sizes depends on what outcomes are evaluated and when these outcomes are measured.
  •  Guideline #3.  Effect sizes are impacted by subjective decisions researchers make about the study design and analyses.
  • Guideline #4.  Strong program or intervention effect sizes must be weighed against how much it costs to implement the program or intervention, relative to both the initial start-up and the ongoing maintenance.
  • Guideline #5.  The ease or difficulty in scaling-up a program or intervention also matters when evaluating the policy relevance of effect sizes.
_ _ _ _ _ _ _ _ _ _

Others’ Concerns with Hattie’s Research 

   To fully consider the concerns with Hattie’s research, it is important to include two additional voices.

   In a past Blog, we discussed the concerns of Dr. Robert Slavin from Johns Hopkins University.  These concerns are summarized in the full Blog message.

   In addition, we add the perspectives of Drs. Pierre-Jerome Bergeron and Lysanne Rivard (from the University of Ottawa and McGill University, respectively) who wrote a 2017 article in the McGill Journal of Education titled, “How to Engage in Pseudoscience with Real Data: A Criticism of John Hattie’s Arguments in Visible Learning from the Perspective of a Statistician.”

   In their article, they make the following points:
  • Hattie’s meta-meta-analyses overlook the fact that some of his calculations produce negative probabilities (values that are mathematically impossible); he also confounds correlation and causality.
  • Hattie believes that effect sizes from separate meta-analytic studies can be compared and averaged simply because Cohen’s d is a measure without a unit or metric; the authors argue that this belief is mistaken, and that his averages therefore do not make sense.
  • In conducting meta-meta-analyses, Hattie is comparing Before-Treatment versus After-Treatment results, not (as in the original meta-analyses he uses) Treatment versus Control Group results (see the formula sketch after this list).
  • Hattie pools studies that have different definitions (and measurements of) student achievement, and treats them as one and the same.
  • Hattie believes that effects below zero are bad. Between 0 and 0.4 we go from “developmental” effects to “teacher” effects. Above 0.4 represents the desired effect zone. There is no justification for this classification.
[CLICK HERE for the full Blog message with more details and quotes from Slavin, Bergeron, and Rivard]
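   To make the Before-versus-After point concrete, the two study designs compute an “effect size” from different quantities, so pooling them as if they were the same number is problematic.  In their simplest textbook forms (shown here for illustration only):

```latex
% Treatment-versus-control effect size (two independent groups)
d_{\text{control}} = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}}

% Before-versus-after (pre-post) effect size (one group measured twice)
d_{\text{pre-post}} = \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{s_{\text{pre}}}
```

   The pre-post version credits the “treatment” with everything that happened over time, including ordinary maturation and regular schooling; the control-group version does not, which is one reason the 0.40 “hinge point” cannot be interpreted the same way across both designs.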
_ _ _ _ _ _ _ _ _ _

How Do You Go from Effect to Effective Practice?

   In the most-current (October, 2018) version of Hattie’s Visible Learning effect sizes, Hattie has organized more than 250 variables into clusters that include: Student, Curricula, Home, School, Classroom, Teacher, and Teaching. 

   In the Figure below, I have listed the top eight effect sizes with their respective “Areas of Research Focus.”

   I have also added a descriptor identifying whether each variable can be changed through an external intervention.  Thus, I am saying that “Students’ Self-Reported Grades,” “Teacher Estimates of Student Achievement,” and a “Teacher’s Credibility with his/her Students” cannot be changed in a sustained way through some type of intervention, and that—even if they could—they would not causally change student achievement.

   Parenthetically, in most cases, these three variables were independent variables in the research investigated by Hattie.


   At this point, we need to discuss how to go from “effect to effective practice.”  To do this, we need to understand exactly what each of the variables in the Figure actually are.
  
   And . . . OK . . . I’ll admit it. 

   As a reasonably experienced school psychologist, I have no idea what the vast majority of these approaches actually involve at a functional school, classroom, teacher, or student level. . . much less what methods and implementation steps to use.

   To begin to figure this out, we would need to take the following research-to-practice steps:
  • Go back to Hattie’s original works and look at his glossaries that define each of these terms
  •  Analyze the quality of each Hattie meta-meta-analysis in each area
  • Find and analyze each respective meta-analysis within each meta-meta-analysis
  • Find and evaluate the studies included in each meta-analysis, and determine which school-based implementation methods (among the variety of methods included in each meta-analysis) are the most effective or “best” methods— relative to student outcomes
  • Translate these methods into actionable steps, while also identifying and providing the professional development and support needed for sound implementation
  • Implement and evaluate the short- and long-term results
   If we don’t do this, our districts and schools will be unable to select the best approaches to enhance their student achievement, and unable to implement these approaches in the most effective and efficient ways.

   This, I believe, is what the researchers are not talking about.
_ _ _ _ _

The Method is Missing

   To demonstrate the research-to-practice points immediately above, the full Blog message analyzes two high-effect-size approaches on Hattie’s list:
  • Response to Intervention (Effect Size: 1.09)
  • Interventions for Students with Learning Needs (Effect Size: 0.77)
[CLICK HERE for the full Blog message]
_ _ _ _ _ _ _ _ _ _

The Questions to Ask the Outside “Hattie Consultants”

   In order for districts and schools to know exactly what implementation steps are needed to implement effective “Hattie-driven” practices so that their students can benefit from a particular effect, we need to “research the research.”

   And yet, the vast majority of districts (much less schools) do not have personnel with the time and skills to do this.

   To fill this gap:  We now have a “cottage industry” of “official and unofficial” Hattie consultants who are available to assist.

   But how do districts and schools evaluate these consultants relative to their ability, experience, and skills to deliver effective services?

   With no disrespect intended, just because someone has been trained by Hattie, has heard Hattie, or has read Hattie—that does not give them the expertise, across all of the 250+ rank-ordered influences on student learning and achievement, to analyze and implement any of the approaches identified through Hattie’s research.

   And so, districts and schools need to ask a series of specific questions when consultants say that their consultation is guided by Hattie’s research.

   Among the initial set of questions are the following:

1.   What training and experience do you have in evaluating psychoeducational research as applied to schools, teaching staff, and students, including students who have significant academic and/or social, emotional, or behavioral challenges?

2.   In what different kinds of schools (e.g., settings, grade levels, socio-economic status, level of ESEA success, etc.) have you consulted, for how long, in what capacity, with what documented school and student outcomes—and how does this experience predict your consultative success in my school or district?

3.   When guided by Hattie’s (and others’) research, what objective, research-based processes or decisions will you use to determine which approaches our district or school needs, and how will you determine the implementation steps and sequences when helping us to apply the selected approaches?

4.   What will happen if our district or school needs an approach that you have no experience or expertise with?

5.   How do you evaluate the effectiveness of your consultation services, and how will you evaluate the short- and long-term impact of the strategies and approaches that you recommend be implemented in our district or school?
_ _ _ _ _ _ _ _ _ _

Summary

   Once again, none of the points expressed in this Blog are personally about John Hattie.  Hattie has made many astounding contributions to our understanding of the research in areas that impact student learning and the school and schooling process.

   However, many of my points relate to the strengths, limitations, and effective use of research reports using meta-analysis and meta-meta-analyses. 

   If we are going to translate this research to sound practices that impact student outcomes, educational leaders need to objectively and successfully understand, analyze, and apply the research so that they make sound system, school, staff, and student-level decisions.

   To do this, we have advocated for and described (see above) a Multiple-gating decision-making approach.

   In the end, schools and districts should not invest time, money, professional development, supervision, or other resources in programs or interventions that have not been fully validated for use with their students and/or staff. 

   Such investments are not fair to anyone, especially when they (a) do not deliver the needed results, (b) leave students further behind, and/or (c) fail and create staff resistance to “the next program” (which might, parenthetically, be the “right” program).
_ _ _ _ _

   I hope that this discussion has been useful to you.

   As always, I look forward to your comments. . . whether on-line or via e-mail.

   If I can help you in any of the areas discussed in this Blog, I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students, staff, school(s), and district.

[CLICK HERE for the full Blog message]

Best,

Howie

Saturday, December 22, 2018

The School Year in Review: Choosing High-Success Academic and Behavioral Strategies (Part I)


Committing to Educational Excellence by Learning from Hattie’s and SEL’s Limitations

[CLICK HERE for the full Blog message]

Dear Colleagues,

Introduction

   The holidays. . . the New Year. . . a time of reflection. . . a time of hope and joy and renewal.

   I would love to say I am feeling nostalgic.  But. . . I’m not.

   On a professional level, I’m dismayed.  I’m disappointed.  I’m determined.  And I know there is a lot of work to do to improve our schools in 2019.
_ _ _ _ _

   In preparing this piece, I read through all of the Blogs that I wrote this year.  I did this to “Review 2018” because—when you are preparing and writing two major messages each month, while maintaining a national consulting business (with almost 200 days per year “on the road”)—you tend to lose sight of what happened in January. . . never mind September or October.

   My Blog review revealed the following themes:

  • Theme 1: Choosing High-Success Initiatives.  Here, we discussed the importance of schools doing their own science-to-practice “due diligence” so that they adopt and implement defensible and high-probability-of-success initiatives and programs on behalf of their students and staff.

We also critically reviewed the research of John Hattie—detailing the strengths and limitations of meta-analytic studies, and emphasizing that schools cannot take Hattie’s effect sizes and move directly to implementation.  Indeed, because meta-analysis pools many separate research studies together, these studies often have different methods, procedures, strategies, and implementation sequences. 

Thus, schools would not know exactly what to implement without critically evaluating the separate studies.

  • Theme 2: The Selling of Social-Emotional Learning (SEL).  As a specific example of the Theme above, we encouraged schools to critically look at the history and foundation of the Social-Emotional Learning (SEL) “movement” (including “mindfulness” practices) so that they understand its political history and motivation, recognize the flaws in its research and practice, and “step back” to reassess how to effectively improve students’ social, emotional, and behavioral skills and self-management abilities.
  • Theme 3: Preventing School Shootings.  Here, we suggested that schools need to go “Back to the Future” by reviewing past recommendations from previous years’ school shooting analyses. . . when re-evaluating their current school safety systems and approaches.  Clearly, this is especially important given the rash of school shootings during 2018.
  • Theme 4: School Discipline and Disproportionality.  Here, we reviewed the importance of proactive, scientifically-based, and multi-tiered school discipline approaches, as well as how to realistically, comprehensively, and pragmatically address the issue of disproportionality.
 _ _ _ _ _ _ _

Theme 1:  Choosing High-Success School Initiatives

   My very first Blog this year (January 13, 2018) focused on what we know about school improvement—based on evaluations from the “No Child Left Behind” (NCLB) years, and what districts and schools need to know about school improvement—from the science-to-practice approaches embedded in strategic planning.

   From an NCLB perspective, published studies consistently conclude that there are lots of school improvement strategies, but most educators do not know how to comprehensively analyze their school’s current strengths, weaknesses, and gaps so that they can strategically and systematically implement the most effective and efficient strategies that will build their school’s capacity—resulting in sustained student outcomes.

   We added a critical point to this first conclusion—that school improvement is contextual. 

   That is, some schools want to go from “great to greater.”  Some schools from “good to great.”  And some schools need to go from a Targeted or Comprehensive Support and Improvement level, respectively, to a point where they are simply providing a consistent, foundationally sound level of good instruction.

   In addition, we emphasized that, in order for continuous school improvement and (especially) school turn-around to succeed, it needs to be done at each involved school and district site using coordinated and sustained activities that include: 

  • Ongoing local needs assessments and strategic planning science-to-practice processes; 
  • Local resource analyses and capacity-strengthening science-to-practice processes; and
  • Local and on-site organizational, staff development, consultation, and technical assistance science-to-practice processes.

   These “keys to success” clearly require professionals both employed at each school site, and in- or out-of-district consultants—all with the shared ability to use the strategic planning processes cited above to select the best services, supports, strategies, and interventions at the district, school, staff, and student levels to facilitate ongoing and sustained success.
_ _ _ _ _

   Below are the 2018 Blogs written in this theme area. . . with their titles, dates of publication, and web-links to the original message.

[CLICK on the Date below to link to the Original Blog]

January 13, 2018    Every School is in “School Improvement” Every Year:  Preparing for ESEA/ESSA–What Effective Schools Do to Continuously Improve . . . and What Ineffective Schools Need to do to Significantly Improve [Part I of II]

January 28, 2018   How Strategic Planning and Organizational Development is Done by Every School . . . Every Year:  An Introduction to Successful School-based Strategic Planning Science-to-Practice [Part II of II]

June 26, 2018   Learning from Another Gates Failure:  It’s Not Just the Money–It’s What You Accomplish with It.  How to Spend ESEA’s Title IV Money Wisely

July 21, 2018    Hattie Haters and Lovers:  Both Still Miss the Effective Implementation that Practitioners Need.  Critical Questions to Ask your “Hattie Consultant” Before You Sign the Contract

August 4, 2018   School Improvement, Strategic Planning, ESEA, and Multi-Tiered Services:  An Anthology of Previous Blogs.  Integrating Successful Research-to-Practice Strategies into the New School Year (Part I of II)

November 25, 2018  It’s Not Too Late to Change: The School Year’s Not Even Half Over.  Why Schools Fail to Act When their Students Fail

December 8, 2018  Reconsidering What Effective High Schools Do, and What Failing High Schools Miss:  Credit Recovery Programs Should be Strategic, Selective, Student-Focused, and Not the Only Game in Town
_ _ _ _ _

The Take-Aways

   Relative to continuous school improvement and, especially, improvement at the Targeted or Comprehensive Support and Improvement levels, respectively, our Blogs looked at recent national reports and other valid and previously-established science-to-practice strategies that create a blueprint for school planning and effectiveness.

Our School Improvement Blueprint included the following components:

   School Vision
      Establish and Communicate a Clear Vision
      Help Staff Understand and Embrace the Need for Change

   Improvement Goals
      Prioritize Goals and Focus Areas
      Make Action Plans Based on Data
      Identify and Achieve a Few Early Wins
      Reduce Time Focused on Nonessentials

   Data-based Decision-Making
      Establish the Expectations for a Data Culture
      Adjust Instructional Practice through Visible Data
      Use Data Continually to Solve Problems   

   Establishing a Culture of Change
      Focus on Successful Tactics, Discontinue Unsuccessful Ones
      Break Rules and Norms, Take New Action
      Change Systems and Structures

   Effective Teachers and Leaders
      Make Necessary Replacements
      Attract, Select, and Retain Top Talent
      Build and Lead a Team of Leaders
      Ensure Ongoing Professional Growth Opportunities

   Instructional Excellence
      Align Instruction to Assessments and Standards
      Monitor and Improve Instructional Quality
      Develop and Deploy a Team of Instructional Leaders

   Strategic Partnerships
      Gain Support of Key Influencers
      Enlist Partner Organizations
_ _ _ _ _

   Critically, and as emphasized above, this blueprint should not be used as a static, one-size-fits-all menu.  Instead, needs and status assessments, resource analyses and coordination, and strategic planning and organizational development strategies are required to individualize the process for each district and school.

   For districts or schools in significant need of improvement, two questions are essential here:
  • With all that a school in improvement status needs to do, which of the possible strategies are the immediate, high-hit strategies that will begin the improvement process in a timely way? 
  • Once these high-hit strategies are identified: exactly what is the training; who and where are the targets; and what are the resources, implementation steps, and short- and long-term outcomes needed such that improvement begins, is established, and can be maintained over time?

[CLICK HERE for the full Blog message with the additional key Take-Aways from this Theme #1.]
_ _ _ _ _ _ _ _ _ _

Theme 2:  The Selling of Social-Emotional Learning

   One of the most notable examples of Theme #1 above is the SEL (Social-Emotional Learning) movement as politically powered by CASEL (the Collaborative for Academic, Social, and Emotional Learning). 

   While recognizing that most schools nationwide are doing “something” that they call “SEL,” this year’s Blog messages provided extensive information on (a) CASEL’s political and foundation-driven agenda, (b) the flaws and limitations in the research that it uses as a rationale for that agenda, and (c) the research-to-practice components of an SEL model that is focused on measurable and developmentally-sensitive multi-tiered social, emotional, and behavioral student skills.

   In the context of Theme #1, districts and schools are encouraged to take a “step back” off the SEL bandwagon, to critically review the research-to-practice multi-tiered components, and to reconfigure the strategies, resources, timelines, and training needed to effectively improve their student, staff, and school “return-on-investment.”

   Below are the 2018 Blogs written in this theme area. . . with their titles, dates of publication, and web-links to the original message.

 [CLICK on the Date below to link to the Original Blog]

February 10, 2018   The Folly and Frustration of Evaluating Schools and Staff Based on the Progress of Students with Significant Social, Emotional, and Behavioral Challenges: Understanding the Student, Home, and Community Factors that Impact Challenging Students

June 2, 2018  Making Mountains Out of Molehills:  Mindfulness and Growth Mindsets.  Critical Research Questions the Impact of Both

October 13, 2018  Social-Emotional Learning:  Education’s Newest Bandwagon . . . and the History of How We Got There (Part I).  Why Most Schools are not Implementing Scientifically-Sound Practices—Wasting Time and Resources

November 10, 2018  The SEL-ing of Social-Emotional Learning:  Education’s Newest Bandwagon. . . Science-to-Practice Goals, Flaws, and Cautions (Part II).  Why Schools Need to Re-Think, Re-Evaluate, Re-Load, and Re-Boot
_ _ _ _ _

The Take-Aways

   All students need to learn and demonstrate—at an appropriate developmental level—effective interpersonal, social problem-solving, conflict prevention and resolution, and emotional control and coping skills.  In the classroom, these skills are essential to maximizing their academic engagement and achievement, as well as their ability to collaborate and learn in cooperative and project-based learning groups.

   The “Good News” is that this is increasingly recognized across our educational communities. 

   The “Bad News” is that many schools are targeting (often due to CASEL’s advocacy) SEL goals and targets that involve constructs instead of skills and behaviors.  Because these constructs are open to interpretation, they are unreliable and, hence, cannot be measured at all, or cannot be measured validly.

   The additional “Bad News” is that “SEL” has been “validated” by the popular press . . . using testimonials, “research” that would be rejected by the Editorial Board of virtually any professional publication, and data that will never demonstrate a causal relationship between school-based activities and student-based outcomes.

[CLICK HERE for the full Blog message with the additional key Take-Aways from this Theme #2.]
_ _ _ _ _ _ _ _ _ _ 

Summary

   Obviously, my primary goal in writing these Blogs is to help districts and schools to maximize the academic and social, emotional, and behavioral skills and competencies of all students. 

   In a multi-tiered context, this means that some students will need remediation, accommodation, and/or modification services, supports, and strategies when struggling academically or presenting with behavioral challenges.  In addition, other students will need strategic or intensive interventions as identified through data-based functional assessment problem-solving processes.

   But another goal is to add a science-to-practice perspective to some of the national reports, approaches, and beliefs that are published and accepted by others. . . sometimes without a full understanding of their history or implications, and sometimes based simply on the perceived “expertise” of the author or the organization sponsoring the work.

   Thus, a final goal is to help educators to “stop and think” and “take a step back” from the premature acceptance of a framework or program that either will not work with their students or will not work with any students.

   Time and resources are precious commodities.  When it comes to our students, staff, and schools, we all need to make sure that these commodities are used well, and that they have a high “return on investment.”  This means that—before implementation—we have validated that they have a high probability of success, that they can be and are implemented with integrity and the correct intensity, and that we are sensitively evaluating their short- and long-term outcomes.
_ _ _ _ _

   In Part II (coming in approximately two weeks), we will discuss and analyze the second set of 2018 themes:
  • Theme 3: Preventing School Shootings.  Here, we will encourage schools to go “Back to the Future” by reviewing past recommendations from previous years’ school shooting analyses when re-evaluating their current school safety systems and approaches.  Clearly, this is especially important given the rash of school shootings during 2018.
This discussion also will critically review—in the most depoliticized way possible—the Federal Commission on School Safety’s Final Report released less than four weeks ago on December 18, 2018.

  • Theme 4: School Discipline and Disproportionality.  Here, we will review the importance of proactive, scientifically-based, and multi-tiered school discipline approaches, as well as how to realistically, comprehensively, and pragmatically address the issue of disproportionality. . . especially with minority students and students with disabilities.
This theme will discuss the implications of the U.S. Department of Education’s December 21, 2018 rescission of the Obama-era guidance aimed at reducing racial discrimination when students are disciplined.  This was done officially by Secretary DeVos just three days after the release of the Federal Commission on School Safety’s Final Report which included this in its recommendations.

   Meanwhile, I hope that this information is useful to you.  Believe it or not, if you would like to discuss anything on an individual district, school, or agency level, I am (still) providing free one-hour conference calls even during this holiday season.

   Speaking of which, I hope that your Holidays were filled with happiness and joy.  Please accept my best wishes for the upcoming New Year !!!

Best,

Howie

[CLICK HERE for the full Blog message]