Saturday, March 16, 2024

Helping Schools Pick and Implement the Best Evidence-Based Programs (Part II)

Avoiding Mistakes, Best Practices, and Pilot Projects

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

Dear Colleagues,

Introduction: Going Backwards to Move Forward

   Districts and schools are always in the process of “buying stuff.”

   They are constantly acquiring curricula, assessments, interventions, technology, professional development, consultants.

   These acquisitions should not be chosen randomly or based on testimonials or marketing promises describing “research” that is methodologically unsound and that does not demonstrate objective and consistently meaningful student, staff, and school outcomes.

_ _ _ _ _

   In Part I of this two-part series, we encouraged districts and schools to make objective, data-driven decisions in these areas, recommending specific definitions and standards. We used the commercials at the Super Bowl as a metaphorical guide.

February 24, 2024

What Super Bowl Commercials Teach Education About Media and Product Literacy: The Language and Process that Helps Schools Vet New Products and Interventions (Part I)

[CLICK HERE to Read and Review]

_ _ _ _ _

   That Blog emphasized and outlined—applying the goals and questions within a sound middle school Media Literacy program—why educators need to be both Media and Product Literate when reviewing and evaluating marketing materials or on-line reviews of curricula, instructional or intervention products, assessment or evaluation tools, professional development programs, or direct and indirect consultation services being considered for purchase.

   We described in detail three common terms used to “validate” these products: “Scientifically-based,” “Evidence-based,” and “Research-based.”

   Here, we asserted that it is important for educators to understand (a) these terms’ respective definitions, histories, and differences; and (b) the questions and objective criteria needed to determine whether a product can validly provide the student, staff, or school outcomes that it asserts.

   We understand that Social Media and Product Literacy—and their accompanying reviews—take time.

   But especially for purchases that will be used or implemented for five or more years (e.g., a new reading, math, or science curriculum or on-line program; a new district Student Information or Data Management System), the review time avoids costly mistakes, and is essential to long-term student, staff, and school success.

   At the end of the Blog Part I, we referenced a recent January 19, 2024 Education Week article that discussed the “Five Mistakes for Educators to Avoid When Picking ‘Evidence-Based’ Programs.”

   In this Blog Part II, we explore this article and its implications to further assist districts and schools before they acquire “new stuff.”

_ _ _ _ _ _ _ _ _ _

Providing Context to Move Forward

   As a national consultant, I am frequently concerned with the selection and implementation of evidence-based programs and practices as I help districts and schools across the country implement effective academic and social, emotional, and behavioral strategies for their students and staff.

   Two on-line sources of evidence-based programs are the What Works Clearinghouse for education, and the Evidence-Based Practices Resource Center for mental health.

   But, even though there are other ways to conduct sound research that validates different strategies and interventions, both Centers rely almost exclusively on a “Gold Standard” approach when designating programs or practices as evidence-based.

   This approach typically emphasizes the use of Randomized Control Trials (RCT) that demonstrate that a specific program or practice is causally (not correlationally) responsible for targeted student outcomes.

   In an RCT study, students are randomly assigned to either a Treatment Group (that receives the to-be-evaluated program or practice) or a Control or Comparison Group (that either does not receive the program or practice, or receives an innocuous “placebo” approach that is irrelevant to the targeted outcomes).
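   For readers who want to see this logic in action, here is a minimal sketch (written in Python, with entirely made-up student names and simulated scores) of how random assignment and a Treatment-versus-Control comparison work in principle. It illustrates the RCT idea only; it is not a reproduction of any actual study.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical roster of 40 students (names and scores are illustrative only).
students = [f"Student_{i}" for i in range(40)]

# Random assignment: every student has an equal chance of landing in the
# Treatment Group (receives the program) or the Control Group (does not).
random.shuffle(students)
treatment, control = students[:20], students[20:]

# Simulated post-test scores; the Treatment Group is given a modest boost
# to stand in for a real program effect (a pure assumption for this demo).
scores = {s: random.gauss(70, 10) for s in control}
scores.update({s: random.gauss(75, 10) for s in treatment})

treatment_scores = [scores[s] for s in treatment]
control_scores = [scores[s] for s in control]

# Because assignment was random, a reliable difference in group means can be
# attributed to the program (a real analysis would add a significance test).
print(f"Treatment mean: {mean(treatment_scores):.1f}")
print(f"Control mean:   {mean(control_scores):.1f}")
print(f"Difference:     {mean(treatment_scores) - mean(control_scores):.1f}")
```

   The key point of the sketch is that random assignment makes the two groups comparable before the program starts, so a reliable difference in outcomes afterward can be attributed to the program rather than to pre-existing differences among the students.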

   My point here is not to get into a heavy discussion of educational research.

   My point is that—if the above description already has your head spinning, and you are responsible for selecting a strategy or intervention for your classroom, grade-level, department, or school—you may avoid the technical research and then choose the wrong intervention.

   Hence, the “five mistakes” from the Education Week article.

_ _ _ _ _ _ _ _ _ _

Mistakes to Avoid When Choosing Evidence-Based Programs

   The five mistakes that educators need to be mindful of when evaluating and choosing an evidence-based program, curriculum, or intervention are:

·       Equating Research Quality with Program Quality

·       Looking only at the summary (or rating)

·       Focusing too much on effect size

·       Forgetting whom the program serves

·       Taking ‘no effect’ for a conclusive answer

   To summarize:

   Even when a program, curriculum, or intervention meets the “gold standard” of research, this “designation” may say more about the quality of the research than the quality of the approach.

   This is because the research often does not tease out exactly why the approach was successful—especially when the program, curriculum, or intervention is complex and multi-faceted.

   Indeed, a program may contain elements that are unsuccessful, but when the results are pooled, their effects may be masked by the statistically positive effects of other elements that compensate for them. For example, a multi-component program might pair a strongly effective reading component with an ineffective writing component and still show an overall positive result.

   Given this, educators must look past the ways that, for example, the What Works Clearinghouse organizes the recommendations in its summaries:

·       For Individual Studies and Intervention Reports: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), Uncertain Effects, and Negative Effects; and 

·       For Practice Guides: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), or Evidence that Demonstrates a Rationale for a Recommendation (Tier 4). . .

and really read the study(ies) reviewed in a research report, or the methods described in a published research article.

_ _ _ _ _

   Educators must also understand what an effect size represents.

   One of the most common effect size calculations is Cohen’s d, which expresses the difference between the Treatment and Control Group means in standard deviation units. Cohen suggested that d = 0.2 is a “Small” effect size, d = 0.5 is a “Medium” effect size, and d = 0.8 or greater is a “Large” effect size.

   But what does this mean?

   Statistically, a Small (0.2) effect size means that 58% of the Control Group in a study—on the scores or ratings used to evaluate the program, curriculum, or intervention—fell below the average of the Treatment (also called the Targeted, Participating, Intervention, or Experimental) Group.

   A Medium (0.5) effect size means that 69% of the Control Group fell below the Treatment Group’s average on those same scores or ratings.

   A Large (0.8) effect size means that 79% of the Control Group fell below the Treatment Group’s average.

   Thus, even with a Large effect size, 21% of a Control Group (i.e., one out of every five students)—students who did not participate in, for example, a new reading or social-emotional learning program—did as well as or better than the average student who actually participated in the program.

   Critically, even with a 1.4 effect size, 8% of a Control Group did as well as or better than the average student who received the new program.
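   If you would like to check these percentages yourself, the short sketch below (Python, standard library only) computes Cohen’s d from two groups’ summary statistics and converts any d value into the percentage of the Control Group falling below the Treatment Group’s average. The means, standard deviations, and group sizes in the example are hypothetical.

```python
from math import erf, sqrt

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d: the Treatment-minus-Control mean difference in pooled-SD units."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def pct_control_below_treatment_mean(d):
    """Percent of the Control Group scoring below the Treatment Group's average
    (Cohen's U3), assuming normally distributed scores with equal spread."""
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))  # standard normal CDF evaluated at d

# Hypothetical example: Treatment mean 75, Control mean 70, both SDs 10, n = 20 each.
print(f"Example d: {cohens_d(75, 70, 10, 10, 20, 20):.2f}")  # -> 0.50

for d in (0.2, 0.5, 0.8, 1.4):
    below = pct_control_below_treatment_mean(d)
    print(f"d = {d}: {below:.0f}% of the Control Group fell below the Treatment "
          f"Group's average; {100 - below:.0f}% did as well or better without the program")
```

   Running the sketch reproduces the 58%, 69%, and 79% figures above, along with the 21% and 8% of Control Group students who did just as well without the program.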

_ _ _ _ _

   Moving on: Even when the research for a program, curriculum, or intervention is positive, educators still need to ask the following essential questions:

·       “Was this program, curriculum, or intervention validated for students, staff, and schools like mine?”; 

·       “Do I have the time, resources, and support to implement this approach in my setting?”; and 

·       “Does my setting’s circumstances (e.g., the need for immediate change because of a crisis situation) match those present in the approach’s validating research?”

_ _ _ _ _

   Finally, even when a program, curriculum, or intervention has not been designated as evidence-based by these Centers, educators still need to read the research.

   As alluded to above, sometimes other sound research methods, ones not preferred or accepted by the What Works Clearinghouse or the Evidence-Based Practices Resource Center, might still validate the approach.

_ _ _ _ _

   The “bottom line” in all of this is that educators must (a) commit to objective and data-driven decisions about the new programs, curricula, or interventions that they need; (b) understand the methodological and statistical elements of the research that has evaluated the approaches they are considering; (c) ensure that the approaches are well-matched to their students, staff, and/or schools; and (d) make sure that they have the time and resources needed to implement the finally-selected approach with integrity and at its needed intensity.

_ _ _ _ _ _ _ _ _ _

Post-Script: Avoiding “Best Practices” and “Pilot Projects”

   In a February 12, 2024 article in Fast Company, Keyanna Schmiedl explained “Why it’s time to stop saying ‘best practices’ in the business world.”

[CLICK HERE to Link to this Article]

   Discussing her preference for the term “Promising Practices” over “Best Practices,” she stated:

Language is crucial to leadership.

 

A single word or phrase can change the tone of an entire statement, and thus, the message employees take away from it. Those takeaways then develop into attitudes, which influence company culture and productivity.

 

Therein lies the issue with the term best practices. “Best” doesn’t leave room for flexibility and conversation. “Best” implies there’s only one solution or set of solutions to a problem, and that those solutions should remain unchallenged. And when you aren’t ready to challenge the status quo, you aren’t going to make any progress.

 

According to Salesforce, 86% of employees and executives believe a lack of collaboration or ineffective communication is the cause of workplace failures.

 

By adopting an ethos of promising practices—encouraging leaders to build with their employees, rather than simply instructing them on what they think is best—leaders can create the culture of collaboration and accountability needed to foster success.

 

(P)romising practices empower companies to lead with a mindset of humility and growth. Leaders can say, “This practice is hopeful. It brought good results for us, and we think it can bring good results for you, too.” Then, other organizations can take that baseline method and make it work for them.

 

Taking a holistic approach and incorporating the employee voice is what leads to more effective problem-solving, and therefore, the development of promising practices that work better for everyone.

_ _ _ _ _

   Schmiedl’s comments apply directly to districts, schools, and educational leaders.

   However, I recommend two important semantic changes as additional reasons to retire the term “Best Practices” in education.

   The first semantic change is to change Schmiedl’s “baseline method” term to “evidence-based blueprints.”

   In a science-to-practice context—and incorporating this Blog Series’ earlier discussions—I consistently describe the interdependent components that guide successful school change or improvement as existing within “evidence-based blueprints.” These blueprints cover, for example, strategic planning, differentiated instruction, the continuum of academic or social-emotional interventions, and multi-tiered systems of support.

   They are “evidence-based” because all of my work either has been formally recognized as evidence-based (through the U.S. Substance Abuse and Mental Health Services Administration—SAMHSA) or uses research-to-practice components that are field-proven. That is, across large numbers of schools in diverse settings across the country, objective evaluations have demonstrated consistent and meaningful student, staff, and school outcomes.

   They are “blueprints” because, as above, they identify the essential interdependent components needed for successful implementation, but give schools the flexibility (a) to include complementary strategies that add depth and breadth; (b) to sequence their activities in strategic and student-need-driven ways; and (c) to align their professional development, coaching, and evaluation approaches to maximize existing resources and staff capabilities.

   The second semantic change—which still supports Schmiedl’s recommendation that we retire the term “Best Practices”—is to replace it with the term “Effective Practices.”

   The two related reasons are:

·       Many educators hear the term “Best Practices,” think that the recommended practices will make them work “over and above” what really is necessary, and ask, “Why can’t we just do what is needed to make us successful? Why do we have to go ‘above and beyond’?” 

Quite simply: When educators hear “Effective Practices,” they are more comfortable that the recommended practices address the questions above.

_ _ _ _ _

·       Many administrators and school board members hear the term “Best Practices,” think that the recommended practices will be overly expensive, and ask, “Why are you selling us a Lexus, when all we need is a Toyota?” 

Once again, when they hear “Effective Practices,” they are comfortable that the costs will produce the expected outcomes, and that spending less might undercut those outcomes.

_ _ _ _ _

   Finally, as long as we are retiring the term “Best Practices,” let’s also reconsider the use of Pilot Projects.

   In my experience, districts and schools most often implement Pilot Projects when a program or approach:

·       Is being pushed by small groups of educators whose administrators are not terribly interested but nonetheless do not want to completely discourage the group or tell them “no” straight out; 

·       Has questionable research or is unproven with the projected group(s) of students, staff, or schools; or

·       Is proposed when the district or school doesn’t have (and may never have) the money or resources to go “all-in” on the program or approach.

   But Pilot Projects are also often recommended when someone proposes a well-validated program, curriculum, or intervention—one that would have long-term positive impacts on students, staff, and schools—and the administrators in question really don’t like the approach (or, sometimes, the individuals making the proposal).

   Here, the administrators want to appear “open to new ideas,” but they really are hoping that the pilot will fail or the individuals will become discouraged.

   Even when implemented and successful, pilot projects rarely are scaled up. This is because: 

·       Those (usually, school staff) who do not want a successful pilot project to expand to their school, department, or grade level, find ways to question, minimize, reject, or cast doubt on its ability to be scaled-up or to work “in our school with our staff and our students;” and

·       Those (usually, district administrators) who do not want the successful pilot project to expand, cite the scale-up’s resources and costs, and its “competition” with other district priorities as reasons to not take the next steps.

   As an outside consultant, given the circumstances above and—especially—the low potential for eventual system-wide scale-up, I almost never agree to work in a district on a “pilot project.”

   If you are a district-employed staff member, know that your involvement in a pilot project may result in angry, jealous, or slighted colleagues. . . especially when they perceive you as receiving “special” attention, releases, resources, or privileges.

   On a semantic level, I understand that some programs, curricula, or interventions need to be “Field-Tested”. . . so let’s use this term. The term “Pilot Project” simply carries too much baggage. . . and this baggage, once again, predicts that the approach will never be fully implemented to benefit the students, staff, and schools that it might.

_ _ _ _ _ _ _ _ _ _

Summary

   Building on Part I of this two-part Series, this Blog Part II first discussed the evaluative approaches used by the What Works Clearinghouse for education and the Evidence-Based Practices Resource Center for mental health to rate specific programs, curricula, and interventions for implementation in districts, schools, and other educational settings.

   We then summarized the five “mistakes” that educators should avoid when choosing evidence-based programs. These mistakes are:

·       Equating Research Quality with Program Quality

·       Looking only at the summary (or rating)

·       Focusing too much on effect size

·       Forgetting whom the program serves

·       Taking ‘no effect’ for a conclusive answer

   Finally, we expanded the discussion, addressing why education should change the term “Best Practices” to “Effective Practices,” and why educators should be wary when administrators give permission for “Pilot Projects” in lieu of the full, system-wide implementation of well-validated programs, curricula, or interventions.

_ _ _ _ _

A Funding Opportunity: Speaking of Evidence-based Programs

   When districts or schools are interested in implementing my work (especially when funding is dwindling or short), I often partner with them and help them write (often, five-year) federal grants from the U.S. Department of Education.

   To this end:

   A new $4 million grant program is coming up in a few months that focuses on moderate to large school districts with at least 25 elementary schools.

   As we can submit multiple grants from different districts, if you are interested in discussing this grant and a partnership with me, call (813-495-3318) or drop me an e-mail as soon as possible (howieknoff1@projectachieve.info).

   A separate five-year $4 million grant program will likely be announced a year from now. This program is open to districts of all sizes.

   If you are interested, once again, it is not too early to talk.

   BOTH grant programs focus on (a) school safety, climate, and discipline; (b) classroom relationships, behavior management, and engagement; and (c) teaching students interpersonal, conflict prevention and resolution, social problem-solving, and emotional awareness, control, communication, and coping skills and interactions.

   If we partner, I will write the bulk of the Grant proposal (at no cost), and guide you through its submission.

   Beyond these grants, if you are interested in my work for your school or educational setting, I am happy to provide a free consultation for you and your team to discuss needs, current status, goals, and possible approaches.

   Call me or drop me an e-mail, and let’s get started.

Best,

Howie

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

Saturday, February 24, 2024

What Super Bowl Commercials Teach Education About Media and Product Literacy

The Language and Process that Helps Schools Vet New Products and Interventions (Part I)

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

 

Dear Colleagues,

Introduction: The Big Game

   Let’s be honest.

   Taylor Swift aside, it seems that Super Bowl viewers are divided between those watching the game and those watching the commercials.

   And the commercials in this month’s (2024) Super Bowl seemed to bring out more “Stars” than usual: Ben Affleck, Matt Damon, Tom Brady, JLo, Jennifer Aniston, David Schwimmer, Jenna Ortega, Chris Pratt, Addison Rae, Jelly Roll, Judge Judy, Ice Spice, Lionel Messi, Kate McKinnon, Vince Vaughn, Quinta Brunson, Wayne Gretzky, Christopher Walken, and more.

   Stars make commercials memorable. . . using their presence to implicitly or explicitly endorse products. And this gives these products traction, and traction sells products.

   No one really cares whether the Star has actually used the product or validated its quality. In fact, in the final analysis, some “great” Stars endorse “lousy” products.

_ _ _ _ _

   Super Bowl commercials use language strategically.

   The most strategic language in a commercial is the slogan. Great slogans remain in the brain forever, and they are immediately (re)associated with their product. . . even when the slogan hasn’t been used for a decade or more.

   For example, for the Baby Boomers out there. . . what products are associated with the following slogans? (Sorry, Gen Z’s, you’ll have to Google these on your own):

·       “I can’t believe I ate the whole thing.”

·       “I bet you can’t eat just one.”

·       “Where’s the beef?”

·       “You deserve a break today.”

·       “Put a tiger in your tank.”

·       “Cleans like a white tornado.”

·       “What happens in Vegas, stays in Vegas.”

·       “America runs on Dunkin’.”

_ _ _ _ _

   When a slogan “catches on” in the popular vernacular, it pays unexpected (pun intended) product dividends. . . literally.

   Just like the “Stars” above, a popular Slogan sells products. But the slogan truly has nothing to do with the quality of the product.

   Regardless of Stars and Slogans, the quality of a product is based on whether it “does the job” effectively, efficiently, and in a cost-effective way. While the Star and Slogan may be responsible for the first purchase, the commitment to re-purchase the product and become a “lifetime user” is based on data, efficacy, and customer experience.

   And this is true whether the product costs $5.00 or $50,000.

_ _ _ _ _ _ _ _ _ _

Media Psychology, Media Literacy, Product Literacy, and Buying New Educational Stuff

   Media Psychology is a “newer” branch of psychology that examines the ways people are impacted by media and technology. Consumer psychology, meanwhile, studies how people’s thoughts, beliefs, emotions, and perceptions influence what they buy and use.

   In today’s digital, on-line world, these two areas are virtually (sorry. . .) interdependent.

   So what does all of this have to do with education?

   Educators. . . on their own behalf, and on behalf of their students, schools, districts, and/or other educational settings. . . are consumers who are influenced by media.

   We purchase curricula, on-line instructional or intervention products, assessment or evaluation tools, professional development programs, and direct and indirect consultation services.

   We, sometimes, are influenced by famous and notable authors, speakers, researchers, and product developers (the “Stars”) and their status or testimonials. . . and sometimes by Slogans, marketing campaigns, and “research” that would never be accepted by the Editorial Board of any refereed professional publication in our field.

   Just like Middle School students (see below), educators need to be trained in Media Literacy so they can be Product Literate.

_ _ _ _ _

Media Literacy From Middle School Students to “Middle School” Educators

   A November 14, 2023 article in U.S. News & World Report stated:

The rise of the internet and social media makes it easier than ever to access information – and that includes information that’s false or misleading.

 

Add to that increasing political polarization, eroding trust in mainstream media and institutions, the rise of artificial intelligence products and a tendency to dismiss any information one doesn’t agree with, and many agree that the ability to critically evaluate information is a vital subject for schools.

 

When you get to AI-generated content, it makes fact-checking even more necessary because AI-generated images, texts and narratives can be inaccurate, biased, plagiarized or entirely fabricated. It can even be created to intentionally spread disinformation. We’re talking about media literacy, and within media literacy is news literacy.


The need for better media literacy education in schools is nearing a breaking point, some experts say, and health professionals have made public pleas saying as much. In May 2023, U.S. Surgeon General Admiral Vivek H. Murthy called on lawmakers to support media literacy in schools, while the American Psychological Association issued a health advisory recommending teenagers be trained in social media literacy before using the platforms.

 

Media literacy education teaches students to think critically about media messages and to create their own media “thoughtfully and conscientiously,” according to Media Literacy Now. According to (its annual) report: Ohio, New Jersey, Delaware and Florida require K-12 media literacy standards. New Jersey, Delaware and Texas require K-12 media literacy instruction. Illinois, Colorado, Massachusetts, Nebraska and Connecticut require some limited form of media literacy instruction. Nebraska and Minnesota require standards in some grades and subjects.

 

What Does a Good Media Literacy Program Look Like?

 

A good media literacy program starts by teaching students how to ask good questions and become interrogators of information, experts say. By the end of middle school, students should be able to read laterally, meaning they can use the internet to check the veracity of news they see online.

_ _ _ _ _

   Common Sense Media, in a June 4, 2020 article, expanded on the goals of a sound media literacy program and the resulting questions that Middle School students should be able to answer.

   Upon completion, the program should help students:

  • Learn to think critically. As kids evaluate media, they decide whether the messages make sense, why certain information was included, what wasn't included, and what the key ideas are. They learn to use examples to support their opinions. Then they can make up their own minds about the information based on knowledge they already have.
  • Become a smart consumer of products and information. Media literacy helps kids learn how to determine whether something is credible. It also helps them determine the "persuasive intent" of advertising and resist the techniques marketers use to sell products.
  • Recognize point of view. Every creator has a perspective. Identifying an author's point of view helps kids appreciate different perspectives. It also helps put information in the context of what they already know -- or think they know.
  • Identify the role of media in our culture. From celebrity gossip to magazine covers to memes, media is telling us something, shaping our understanding of the world, and even compelling us to act or think in certain ways.
  • Understand the author's goal. What does the author want you to take away from a piece of media? Is it purely informative, is it trying to change your mind, or is it introducing you to new ideas you've never heard of?

_ _ _ _ _

   The key questions that students should learn to ask are:

Who created this? Was it a company? Was it an individual? (If so, who?) Was it a comedian? Was it an artist? Was it an anonymous source? Why do you think that?

 

Why did they make it? Was it to inform you of something that happened in the world (for example, a news story)? Was it to change your mind or behavior (an opinion essay or a how-to)? Was it to make you laugh (a funny meme)? Was it to get you to buy something (an ad)? Why do you think that?

 

Who is the message for? Is it for kids? Grown-ups? Girls? Boys? People who share a particular interest? Why do you think that?

 

What techniques are being used to make this message credible or believable? Does it have statistics from a reputable source? Does it contain quotes from a subject expert? Does it have an authoritative-sounding voice-over? Is there direct evidence of the assertions it's making? Why do you think that?

 

What details were left out, and why? Is the information balanced with different views -- or does it present only one side? Do you need more information to fully understand the message? Why do you think that?

 

How did the message make you feel? Do you think others might feel the same way? Would everyone feel the same, or would certain people disagree with you? Why do you think that?

_ _ _ _ _

   Shifting this back to the need for educators to be Media and Product Literate, I would like you to re-read the above Common Sense Media quotes, substituting the word “Educators” for “students” and “kids.” As you do this, think about a time when you were reviewing and evaluating, for purchase, the marketing materials or an on-line review of a curriculum, on-line instructional or intervention product, assessment or evaluation tool, professional development program, or direct and indirect consultation service.

   Hopefully, enough said.

   Like a media-illiterate Middle School student (with no disrespect intended), how many times do we fall prey to product hyperbole, misinformation, and barely perceptible deception. . . and how often should we ask colleagues with more sophisticated psychometric, technical, or evaluative skills for an independent product appraisal?

_ _ _ _ _ _ _ _ _ _

Analyzing Some Product Literacy Language

   One of the “slogans” in educational research-to-practice involves the terms used to suggest that a curriculum, on-line instructional or intervention product, assessment or evaluation tool, professional development program, or direct and indirect consultation service has been validated.

   The most common terms are: “Scientifically-based,” “Evidence-based,” and “Research-based.”

   Because these terms are often thrown into the marketing descriptions of certain products, it is important for educators to understand (a) their definitions, histories, and how they differ; and (b) the questions and objective criteria needed to determine that a product validly provides the student, staff, or school outcomes that it asserts (and that one or more of the terms above can be used accurately).

   Relative to history and definitions, we need to consider the Elementary and Secondary Education Act of 2001 (No Child Left Behind—ESEA/NCLB) and 2015 (Every Student Succeeds Act—ESEA/ESSA); and ESEA’s current “brother”—the Individuals with Disabilities Education Act (IDEA 2004).

   Let’s go term by term.

_ _ _ _ _

Scientifically Based

   This term appeared in ESEA/NCLB 2001 twenty-eight times, and it was (at that time) the “go-to” definition in federal education law when discussing how to evaluate the efficacy, for example, of research or programs that states, districts, and schools needed to implement as part of their school and schooling processes.

   Significantly, this term was defined in the law.  According to ESEA/NCLB:

The term scientifically based research—

 

(A) means research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs; and

 

(B) includes research that—

 

(i) employs systematic, empirical methods that draw on observation or experiment;

 

(ii) involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn;

 

(iii) relies on measurements or observational methods that provide reliable and valid data across evaluators and observers, across multiple measurements and observations, and across studies by the same or different investigators;

 

(iv) is evaluated using experimental or quasi-experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls;

 

(v) ensures that experimental studies are presented in sufficient detail and clarity to allow for replication or, at a minimum, offer the opportunity to build systematically on their findings; and

 

(vi) has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review.

_ _ _ _ _

   The term “scientifically based” is found in IDEA 2004 twenty-five times—mostly when describing “scientifically based research, technical assistance, instruction, or intervention.”

   The term “scientifically based” is found in ESEA/ESSA 2015 ONLY four times—mostly as “scientifically based research.”  This term appears to have been replaced by the term “evidence-based” (see below) as the “standard” that ESEA/ESSA wants used when programs or interventions are evaluated for their effectiveness.

_ _ _ _ _

Evidence-Based

   This term DID NOT APPEAR in either ESEA/NCLB 2001 or IDEA 2004.

   However, it does appear in ESEA/ESSA 2015—sixty-three times (!!!). . . most often when describing “evidence-based research, technical assistance, professional development, programs, methods, instruction, or intervention.”

   Moreover, as the new (and current) “go-to” standard when determining whether programs or interventions have been empirically demonstrated as effective, ESEA/ESSA 2015 defines this term.

   As such, according to ESEA/ESSA 2015:

(A) IN GENERAL.—Except as provided in subparagraph (B), the term ‘evidence-based’, when used with respect to a State, local educational agency, or school activity, means an activity, strategy, or intervention that

 

   ‘(i) demonstrates a statistically significant effect on improving student outcomes or other relevant outcomes based on—

 

      ‘(I) strong evidence from at least 1 well-designed and well-implemented experimental study;

 

   ‘(II) moderate evidence from at least 1 well-designed and well-implemented quasi-experimental study; or

 

      ‘(III) promising evidence from at least 1 well-designed and well-implemented correlational study with statistical controls for selection bias; or

 

   ‘(ii)(I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and

 

     ‘(II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.”

 

(B) DEFINITION FOR SPECIFIC ACTIVITIES FUNDED UNDER THIS ACT.—When used with respect to interventions or improvement activities or strategies funded under Section 1003 [School Improvement], the term ‘evidence-based’ means a State, local educational agency, or school activity, strategy, or intervention that meets the requirements of subclause (I), (II), or (III) of subparagraph (A)(i).
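   To make this definition easier to apply, here is a minimal, unofficial sketch (in Python) that maps the design of a single well-designed and well-implemented validating study onto the evidence levels in subparagraph (A) above. The function name and its inputs are my own illustrative shorthand, not language from the law.

```python
def essa_evidence_level(design, significant_effect,
                        controls_selection_bias=False,
                        has_rationale_and_ongoing_evaluation=False):
    """Illustrative mapping of one well-designed, well-implemented validating
    study onto the ESEA/ESSA 2015 'evidence-based' levels quoted above."""
    if significant_effect:
        if design == "experimental":            # e.g., a randomized control trial
            return "Strong evidence -- (A)(i)(I)"
        if design == "quasi-experimental":
            return "Moderate evidence -- (A)(i)(II)"
        if design == "correlational" and controls_selection_bias:
            return "Promising evidence -- (A)(i)(III)"
    if has_rationale_and_ongoing_evaluation:
        return "Demonstrates a rationale -- (A)(ii)"
    return "Does not meet the ESEA/ESSA 'evidence-based' definition"

# A randomized study with a statistically significant positive effect:
print(essa_evidence_level("experimental", significant_effect=True))

# A correlational study with statistical controls for selection bias:
print(essa_evidence_level("correlational", significant_effect=True,
                          controls_selection_bias=True))
```

   In practice, of course, the judgment calls (Was the study well-designed? Were the statistical controls adequate?) are exactly the Product Literacy questions discussed throughout this Blog.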

_ _ _ _ _

Research-Based

   This term appeared five times in ESEA/NCLB 2001; it appears four times in IDEA 2004; and it appears once in ESEA/ESSA 2015. When it appears, the term is largely used to describe programs that schools need to implement to support student learning.

   Significantly, the term research-based is NOT defined in either ESEA law (2001, 2015) or in IDEA 2004.

_ _ _ _ _ _ _ _ _ _

What You Should Know and Ask When Evaluating Programs Using These Terms

Scientifically Based

   At this point in 2024, if a product developer uses the term “scientifically based,” s/he probably doesn’t know that this term has functionally been eliminated as the “go-to” term in federal education law. At the same time, as an informed consumer, you can still ask the developer what s/he means by “scientifically based.” 

   Then. . . if the developer continues to say that his/her product is scientifically based, you should request and review the validating research studies—preferably ones published in refereed journals—including descriptions of their:

·       Demographic backgrounds and other characteristics of the students participating in the studies (so you can compare and contrast these students to your students);

·       Research methods used in the studies (so you can validate that the methods were sound, objective, and that they involved control or comparison groups not receiving the program or intervention);

·       Outcomes measured and reported in the studies (so you can validate that the research was focused on student outcomes, and especially the student outcomes that you are most interested in for your students);

·       Data collection tools, instruments, or processes used in the studies (so that you are assured that they were psychometrically reliable, valid, and objective—such that the data collected and reported are demonstrated to be accurate);

·       Treatment or implementation integrity methods and data reported in the studies (so you can objectively determine that the program or intervention was implemented as it was designed, and in ways that make sense);

·       Data analysis procedures used in the studies (so you can validate that the data-based outcomes reported were based on the “right” statistical and analytic approaches);

·       Interpretations and conclusions reported by the studies [so you can objectively validate that these summarizations are supported by the data reported, and have not been inaccurately- or over-interpreted by the author(s)]; and the

·       Limitations reported in the studies (so you understand the inherent weaknesses in the studies, and can assess whether those weaknesses affected the studies’ integrity and the conclusions they drew about the efficacy of the programs or interventions).

_ _ _ _ _

Evidence-Based

   If a product developer describes a program or intervention as “evidence-based,” you need to ask them whether they are using the term as defined in ESEA/ESSA 2015 (see above) and, if so, which criteria in the law their product has met.

   Critically, very few educational products or psychological interventions meet the (“Gold Standard”) experimental or quasi-experimental criteria in the Law. In fact, most will meet, at best, only the following ESEA/ESSA 2015 criteria:

   ‘(ii)(I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and

 

     ‘(II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.”

   As such, as an informed consumer, we suggest that you ask the product developer all of the same questions outlined in the “scientifically based” section immediately above. The answers will help you determine the objective efficacy of the product, the demographics of the students it has worked with, and the resource, time, and training needs for success.

_ _ _ _ _

Research-Based

   If a product developer uses the term “research-based,” they probably don’t know that the “go-to” term, definition, and standard is now “evidence-based” in federal education law. 

   Moreover, as an informed consumer, a developer’s use of the “research-based” term should raise some “red flags” as it might suggest that the quality of the product’s research and its efficacy may be suspect.

   As such, you will need to ask the developer the same questions in the “scientifically based” section above, independently evaluating the quality of the responses.

   After (a) analyzing the information from the product’s research and implementation studies, and (b) answering the evaluative questions, you can ask yourself:

·       Is there enough objective information to conclude that the “recommended” product is independently responsible for the student outcomes that are purported and reported?

_ _ _ _ _

·       Is there enough objective data to demonstrate that the “recommended” product is appropriate for my student population, and will potentially result in the same positive and expected outcomes (if present)?

[The point here is that the program or intervention may be effective—but only with certain students. . . and not your students.]

_ _ _ _ _

·       Will the resources needed to implement the program be time- and cost-effective relative to the “Return-on-Investment”? 

[These resources include, for example, the initial and long-term cost for materials, professional development time, specialized personnel, coaching and supervision, evaluation, parent and community outreach, etc.]

_ _ _ _ _ 

·       Will the “recommended” product be acceptable to those involved (e.g., students, staff, administrators, parents) such that they are motivated to implement it with integrity and over an extended period of time? 

[There is extensive research on the “acceptability” of interventions, and the characteristics or variables that make program or intervention implementation likely or not likely.]

_ _ _ _ _ _ _ _ _ _

Some Final Product Literacy Questions

   Clearly, some products and interventions have sound research that validates their practices. As an inherent part of this validation, these products have been implemented and evaluated with intensity and integrity, and they have produced meaningful and measurable student, staff, and/or school outcomes.

   But even here, recognize that—in analyzing the responses to the evaluative questions suggested throughout this Blog:

·       Some products or interventions will not have demonstrable efficacy; 

·       Some will have some positive outcomes, but they may be over-generalized by the developer so that the product appears more successful than it really is;

·       Some will have positive correlational results, but not the causal results that demonstrate that the product was solely responsible for the positive outcomes; 

·       Some will have demonstrated efficacy—but not be applicable to your students or circumstances; and 

·       Some product developers will still claim—even in the face of your objective analysis regarding the (suspect or poor) quality of the research and the compromised efficacy of the product—that it is effective.

   In this last situation, research is typically compromised when it is conducted (a) with convenience samples; (b) with small, non-representative, and non-random samples; (c) without comparison or matched control groups; and (d) using methodologically unsound scientific approaches.

   Other “published” research should be scrutinized carefully when it appears on a website or in a journal that does not conduct “blind” reviews by three or more members of an established Editorial Board.

   PLEASE NOTE: Anyone can do their own research, pay $50.00 to set up a website, and begin to market their products. To determine if the research is sound, the product generates the results it purports, and the same results will meaningfully occur in your school, agency, or setting, you need to do your own investigations, analyses, and due diligence.

_ _ _ _ _ _ _ _ _ _

Summary

   Metaphorically using the commercials at the Super Bowl as a guide, this Blog emphasized and outlined (applying the goals and questions within a sound middle school Media Literacy program) why educators need to be both Media and Product Literate when reviewing and evaluating the marketing materials or on-line reviews of curricula, instructional or intervention products, assessment or evaluation tools, professional development programs, or direct and indirect consultation services that may be purchased. 

   We then described in detail three common terms used to “validate” these products: “Scientifically-based,” “Evidence-based,” and “Research-based.” 

   Here, we asserted that it is important for educators to understand (a) these terms’ respective definitions, histories, and differences; and (b) the questions and objective criteria needed to determine whether a product can validly provide the student, staff, or school outcomes that it asserts.

   We understand that Social Media and Product Literacy—and their accompanying reviews—take time.

   But especially for purchases that will be used or implemented for five or more years (e.g., a new reading, math, or science curriculum or on-line program; a new district Student Information or Data Management System), the review time is both responsible and essential to long-term student, staff, and school success.

   A January 19, 2024 Education Week article (late last month) discussed the “Five Mistakes for Educators to Avoid When Picking ‘Evidence-Based’ Programs.”

   I highly recommend that you read and discuss this article in your educational setting. Indeed, if you understand the thrust and nuances in this piece, you will realize that you already have the Product Literacy foundation that you need to competently evaluate new products or interventions.

_ _ _ _ _

Answers to the “Slogan Pop Quiz”

   Oh. . . by the way. . . here are the answers to our earlier “Slogan Pop Quiz”:

·       “I can’t believe I ate the whole thing.” [Alka Seltzer]

·       “I bet you can’t eat just one.” [Lay’s Potato Chips]

·       “Where’s the beef?” [Wendy’s]

·       “You deserve a break today.” [McDonald’s]

·       “Put a tiger in your tank.” [Exxon]

·       “Cleans like a white tornado.” [Ajax]

·       “What happens in Vegas, stays in Vegas.” [Las Vegas]

·       “America runs on Dunkin’.” [Dunkin’ Donuts]

   You’re welcome!

_ _ _ _ _

A Funding Opportunity

   My Friends: A lot of my school and district consultation work is funded by (often, five-year) federal grants from the U.S. Department of Education that I write for and with the districts who are interested in implementing my work.

   A new $4 million grant program is coming up in a few months that needs a single moderate to large school district with at least 25 elementary schools.

   As we can submit multiple grants from different districts, if you are interested in discussing this grant and a partnership with me, call (813-495-3318) or drop me an e-mail as soon as possible (howieknoff1@projectachieve.info).

   Another five-year $4 million grant program will likely be announced a year from now. This program will be open to districts of all sizes. If you are interested, once again, it is not too early to talk.

   BOTH grant programs focus on (a) school safety, climate, and discipline; (b) classroom relationships, behavior management, and engagement; and (c) teaching students interpersonal, conflict prevention and resolution, social problem-solving, and emotional awareness, control, communication, and coping skills and interactions.

   Beyond these grants, if you are interested in my work for your educational setting, I am happy to provide a free consultation for you and your team to discuss needs, current status, goals, and possible approaches.

   Again, call me or drop me an e-mail, and let’s get started.

Best,

Howie

 

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]