Avoiding Mistakes, Best Practices, and Pilot Projects
[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]
Dear Colleagues,
Introduction: Going Backwards to Move Forward
Districts and schools are always in the process of “buying stuff.”
They are constantly acquiring curricula, assessments, interventions, technology, professional development, and consultants.
These acquisitions should not be chosen randomly or based on testimonials or marketing promises describing “research” that is methodologically unsound and that does not demonstrate objective and consistently meaningful student, staff, and school outcomes.
_ _ _ _ _
In Part I of this two-part series, we encouraged districts and schools to make objective, data-driven decisions in these areas, recommending specific definitions and standards. We used the commercials at the Super Bowl as a metaphorical guide.
February 24, 2024
What Super Bowl Commercials Teach Education About Media and Product Literacy: The Language and Process that Helps Schools Vet New Products and Interventions (Part I)
[CLICK HERE to Read and Review]
_ _ _ _ _
This Blog emphasized and outlined—applying the goals and questions within a sound middle school Media Literacy program—why educators need to be both Media and Product Literate when reviewing and evaluating marketing materials or on-line reviews of curricula, instructional or intervention products, assessment or evaluation tools, professional development programs, or direct and indirect consultation services for purchase.
We described in detail three common terms used to “validate” these products: “Scientifically-based,” “Evidence-based,” and “Research-based.”
Here, we emphasized that educators must understand (a) these terms’ respective definitions, histories, and differences; and (b) the questions and objective criteria needed to determine whether a product can validly deliver the student, staff, or school outcomes that it asserts.
We understand that Media and Product Literacy—and their accompanying reviews—take time.
But especially for purchases that will be used or implemented for five or more years (e.g., a new reading, math, or science curriculum or on-line program; a new district Student Information or Data Management System), the review time avoids costly mistakes, and is essential to long-term student, staff, and school success.
At the end of Part I, we referenced a January 19, 2024 Education Week article that discussed the “Five Mistakes for Educators to Avoid When Picking ‘Evidence-Based’ Programs.”
In this Blog Part II, we explore this article and its implications to further assist districts and schools before they acquire “new stuff.”
_ _ _ _ _ _ _ _ _ _
Providing Context to Move Forward
As a national consultant, I frequently address the selection and implementation of evidence-based programs and practices as I help districts and schools across the country implement effective academic and social, emotional, and behavioral strategies for their students and staff.
Two on-line sources of evidence-based programs are the What Works Clearinghouse for education, and the Evidence-Based Practices Resource Center for mental health.
But, despite other sound ways to conduct research that can validate different strategies and interventions, both Centers rely almost exclusively on a “Gold Standard” approach when designating programs or practices as evidence-based.
This approach typically emphasizes the use of Randomized Control Trials (RCT) that demonstrate that a specific program or practice is causally (not correlationally) responsible for targeted student outcomes.
In an RCT study, students are randomly assigned to either a Treatment Group (that receives the to-be-evaluated program or practice) or a Control or Comparison Group (that either does not receive the program or practice, or receives an innocuous “placebo” approach that is irrelevant to the targeted outcomes).
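For readers who prefer a concrete picture, the logic of random assignment can be sketched in a few lines of code. This is only an illustration with hypothetical names (not an actual research protocol): every student has an equal chance of landing in either group, which is what allows researchers to attribute outcome differences to the program itself.

```python
import random

def randomly_assign(students, seed=2024):
    """Randomly split a roster into a Treatment Group and a Control Group,
    so that pre-existing differences are spread across both groups by chance."""
    roster = list(students)
    random.Random(seed).shuffle(roster)      # reproducible random shuffle
    midpoint = len(roster) // 2
    return {
        "treatment": roster[:midpoint],      # receives the program or practice
        "control": roster[midpoint:],        # receives nothing, or a placebo
    }

groups = randomly_assign([f"Student {n}" for n in range(1, 21)])
print(len(groups["treatment"]), "assigned to Treatment;",
      len(groups["control"]), "assigned to Control")
```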
My point here is not to get into a heavy discussion of educational research.
My point is that—if the above description already has your head spinning, and you are responsible for selecting a strategy or intervention for your classroom, grade-level, department, or school—you may avoid the technical research and then choose the wrong intervention.
Hence, the “five mistakes” from the Education Week article.
_ _ _ _ _ _ _ _ _ _
Mistakes to Avoid When Choosing Evidence-Based Programs
The five mistakes that educators need to be mindful of when evaluating and choosing an evidence-based program, curriculum, or intervention are:
· Equating Research Quality with Program Quality
· Looking only at the summary (or rating)
· Focusing too much on effect size
· Forgetting whom the program serves
· Taking ‘no effect’ for a conclusive answer
To summarize:
Even when a program, curriculum, or intervention meets the “gold standard” of research, this “designation” may say more about the quality of the research than the quality of the approach.
This is because the research often does not tease out exactly why the approach was successful—especially when the program, curriculum, or intervention is complex and multi-faceted.
Indeed, a program may contain unsuccessful elements that are masked, once the results are pooled, by the statistically positive effects of other elements that compensate for them.
Given this, educators must look past the ways that, for example, the What Works Clearinghouse organizes the recommendations in its summaries:
· For Individual Studies and Intervention Reports: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), Uncertain Effects, and Negative Effects; and
· For Practice Guides: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), or Evidence that Demonstrates a Rationale for a Recommendation (Tier 4). . .
and really read the study(ies) reviewed in a research report, or the methods described in a published research article.
_ _ _ _ _
Educators must also understand what an effect size represents.
One of the most common effect size calculations is Cohen’s d. Cohen suggested that d = 0.2 is a “Small” effect size, d = 0.5 is a “Medium” effect size, and d = 0.8 or greater is a “Large” effect size.
But what does this mean?
Statistically, a Small (0.2) effect size means that 58% of the Control Group in a study—on the scores or ratings used to evaluate the program, curriculum, or intervention—fell below the average of the Targeted, Participating, Treatment, Intervention, or Experimental Group.
A Medium (0.5) effect size means that 69% of the Control Group fell below the Treatment Group’s average on those same scores or ratings.
A Large (0.8) effect size means that 79% of the Control Group fell below the Treatment Group’s average.
Thus, even with a Large effect size, 21% (i.e., one out of every five students) of a Control Group—that did not participate in, for example, a new reading or social-emotional learning program—showed the same positive progress or response as the group of students who actually participated in the program.
Critically, even with a 1.4 effect size, 8% of a Control Group demonstrated the same progress or response to a new program as the students who received that program.
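For readers who want to verify these percentages, they come from a standard conversion sometimes called Cohen’s U3: assuming normally distributed scores, the share of the Control Group falling below the Treatment Group’s average equals the cumulative normal probability of d. Here is a minimal sketch, assuming Python with scipy is available (the function name is mine):

```python
from scipy.stats import norm

def control_below_treatment_average(d):
    """Share of the Control Group expected to score below the Treatment
    Group's average, assuming (as Cohen did) normally distributed scores."""
    return norm.cdf(d)

for d in (0.2, 0.5, 0.8, 1.4):
    below = control_below_treatment_average(d)
    print(f"d = {d}: about {below:.0%} of the Control Group falls below the "
          f"Treatment Group's average; about {1 - below:.0%} does just as well")
```

Running this reproduces the figures above: 58%, 69%, 79%, and 92% of the Control Group falling below the Treatment Group’s average for d values of 0.2, 0.5, 0.8, and 1.4, respectively.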
_ _ _ _ _
Moving on: Even when the research for a program, curriculum, or intervention is positive, educators still need to ask the following essential questions:
· “Was this program, curriculum, or intervention validated for students, staff, and schools like mine?”;
· “Do I have the time, resources, and support to implement this approach in my setting?”; and
· “Do my setting’s circumstances (e.g., the need for immediate change because of a crisis situation) match those present in the approach’s validating research?”
_ _ _ _ _
Finally, even when a program, curriculum, or intervention has not been validated, educators still need to read the research.
As alluded to above, there are sometimes other sound research approaches that might validate an approach but that are not preferred or accepted by the What Works Clearinghouse or the Evidence-Based Practices Resource Center.
_ _ _ _ _
The “bottom line” in all of this is that educators must (a) commit to objective, data-driven decisions about the new programs, curricula, or interventions that they need; (b) understand the methodological and statistical elements of the research that has evaluated the approaches they are considering; (c) ensure that the approaches are well-matched to their students, staff, and/or schools; and (d) make sure that they have the time and resources needed to implement the finally-selected approach with integrity and at its needed intensity.
_ _ _ _ _ _ _ _ _ _
Post-Script: Avoiding “Best Practices” and “Pilot Projects”
In a February 12, 2024 article in Fast Company, Keyanna Schmiedl explained “Why it’s time to stop saying ‘best practices’ in the business world.”
[CLICK HERE to Link to this Article]
Discussing her preference for the term “Promising Practices” over “Best Practices,” she stated:
Language is crucial to leadership. A single word or phrase can change the tone of an entire statement, and thus, the message employees take away from it. Those takeaways then develop into attitudes, which influence company culture and productivity.
Therein lies the issue with the term best practices. “Best” doesn’t leave room for flexibility and conversation. “Best” implies there’s only one solution or set of solutions to a problem, and that those solutions should remain unchallenged. And when you aren’t ready to challenge the status quo, you aren’t going to make any progress.
According to Salesforce, 86% of employees and executives believe a lack of collaboration or ineffective communication is the cause of workplace failures.
By adopting an ethos of promising practices—encouraging leaders to build with their employees, rather than simply instructing them on what they think is best—leaders can create the culture of collaboration and accountability needed to foster success.
(P)romising practices empower companies to lead with a mindset of humility and growth. Leaders can say, “This practice is hopeful. It brought good results for us, and we think it can bring good results for you, too.” Then, other organizations can take that baseline method and make it work for them.
Taking a holistic approach and incorporating the employee voice is what leads to more effective problem-solving, and therefore, the development of promising practices that work better for everyone.
_ _ _ _ _
Schmiedl’s comments apply directly to districts, schools, and educational leaders.
However, I recommend two important semantic changes as additional reasons to retire the term “Best Practices” in education.
The first semantic change is to replace Schmiedl’s term “baseline method” with “evidence-based blueprints.”
In a science-to-practice context—and incorporating this Blog Series’ earlier discussions—I consistently describe the interdependent components that guide successful school change or improvement as existing within “evidence-based blueprints.” These blueprints cover, for example, strategic planning, differentiated instruction, the continuum of academic or social-emotional interventions, and multi-tiered systems of support.
They are “evidence-based” because all of my work either has been designated evidence-based (through the U.S. Substance Abuse and Mental Health Services Administration—SAMHSA) or uses research-to-practice components that are field-proven. That is, across large numbers of schools in diverse settings across the country, objective evaluations have demonstrated our consistent and meaningful student, staff, and school outcomes.
They are “blueprints” because, as above, they identify the essential interdependent components needed for successful implementation, but give schools the flexibility (a) to include complementary strategies that add depth and breadth; (b) to sequence their activities in strategic and student-need-driven ways; and (c) to align their professional development, coaching, and evaluation approaches to maximize existing resources and staff capabilities.
The second semantic change—which still supports Schmiedl’s recommendation that we retire the term “Best Practices”—is to replace it with the term “Effective Practices.”
The two related reasons are:
· Many educators hear the term “Best Practices,” think that the recommended practices will make them work “over and above” what really is necessary, and ask, “Why can’t we just do what is needed to make us successful? Why do we have to go ‘above and beyond’?”
Quite simply: When educators hear “Effective Practices,” they are more comfortable that the recommended practices address the questions above.
_ _ _ _ _
· Many administrators and school board members hear the term “Best Practices,” think that the recommended practices will be overly expensive, and ask, “Why are you selling us a Lexus, when all we need is a Toyota?”
Once again, when they hear “Effective Practices,” they are comfortable that the costs will result in the expected outcomes, and that a lesser investment might undercut these outcomes.
_ _ _ _ _
Finally, as long as we are retiring the term “Best Practices,” let’s also reconsider the use of Pilot Projects.
In my experience, districts and schools most often implement Pilot Projects when a program or approach:
· Is being pushed by small groups of educators, and their administrators really are not terribly interested, but they nonetheless do not want to completely discourage the group or tell them “no” straight out;
· Has questionable research or is unproven with the projected group(s) of students, staff, or schools; or
· Is an approach that the district or school does not have (and may never have) the money or resources to go “all-in” on.
But Pilot Projects are also often recommended when well-validated programs, curricula, or interventions—that would have long-term positive impacts on students, staff, and schools—are suggested, and the administrators in question really don’t like the approach (or, sometimes, the individuals making the proposal).
Here, the administrators want to appear “open to new ideas,” but they really are hoping that the pilot will fail or the individuals will become discouraged.
Even when implemented and successful, pilot projects rarely are scaled up. This is because:
· Those (usually, school staff) who do not want a successful pilot project to expand to their school, department, or grade level, find ways to question, minimize, reject, or cast doubt on its ability to be scaled-up or to work “in our school with our staff and our students;” and
· Those (usually, district administrators) who do not want the successful pilot project to expand, cite the scale-up’s resources and costs, and its “competition” with other district priorities as reasons to not take the next steps.
As an outside consultant, given the circumstances above and—especially—the low potential for eventual system-wide scale-up, I almost never agree to work in a district on a “pilot project.”
If you are a district-employed staff member, know that your involvement in a pilot project may result in angry, jealous, or slighted colleagues. . . especially when they perceive you as receiving “special” attention, releases, resources, or privileges.
On a semantic level, I understand that some programs, curricula, or interventions need to be “Field-Tested”. . . so let’s use this term. The term “Pilot Project” simply carries too much baggage. . . and this baggage, once again, predicts that the approach will never be fully implemented to benefit the students, staff, and schools that it might.
_ _ _ _ _ _ _ _ _ _
Summary
Building on Part I of this two-part Series, this Blog Part II first discussed the evaluative approaches used by the What Works Clearinghouse for education and the Evidence-Based Practices Resource Center for mental health to rate specific programs, curricula, and interventions implemented in districts, schools, and other educational settings.
We then summarized the five “mistakes” that educators should avoid when choosing evidence-based programs. These mistakes are:
· Equating Research Quality with Program Quality
· Looking only at the summary (or rating)
· Focusing too much on effect size
· Forgetting whom the program serves
· Taking ‘no effect’ for a conclusive answer
Finally, we expanded the discussion, addressing why education should change the term “Best Practices” to “Effective Practices,” and why educators should be wary when administrators give permission for “Pilot Projects” in lieu of the full, system-wide implementation of well-validated programs, curricula, or interventions.
_ _ _ _ _
A Funding Opportunity: Speaking of Evidence-based Programs
When districts or schools are interested in implementing my work—especially when funding is dwindling or short—I often partner with them and help them write (often, five-year) federal grants from the U.S. Department of Education.
To this end:
A new $4 million grant program is coming up in a few months that focuses on moderate to large school districts with at least 25 elementary schools.
As we can submit multiple grants from different districts, if you are interested in discussing this grant and a partnership with me, call (813-495-3318) or drop me an e-mail as soon as possible (howieknoff1@projectachieve.info).
A separate five-year $4 million grant program will likely be announced a year from now. This program is open to districts of all sizes.
If you are interested, once again, it is not too early to talk.
BOTH grant programs focus on (a) school safety, climate, and discipline; (b) classroom relationships, behavior management, and engagement; and (c) teaching students interpersonal, conflict prevention and resolution, social problem-solving, and emotional awareness, control, communication, and coping skills and interactions.
If we partner, I will write the bulk of the Grant proposal (at no cost), and guide you through its submission.
Beyond these grants, if you are interested in my work for your school or educational setting, I am happy to provide a free consultation for you and your team to discuss needs, current status, goals, and possible approaches.
Call me or drop me an e-mail, and let’s get started.
Best,
Howie
[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]