Saturday, September 9, 2017

“Scientifically based” versus “Evidence-based” versus “Research-based”—Oh, my!!! (Part II of III)



Making Effective Programmatic Decisions:  Why You Need to Know the History and Questions Behind these Terms


Dear Colleagues,

Introduction

   This (now) three-part series is focusing on how states, districts, schools, and educational leaders make decisions regarding what services, supports, programs, curricula, instruction, strategies, and interventions to implement in their classrooms.  Recognizing that we need to use programs that have documented efficacy and the highest probability of implementation success, it has nonetheless been my experience that many programs are chosen “for all the wrong reasons”—to the detriment of students, staff, and schools.

   In Part I of this series [CLICK HERE], I noted that:

·       Beyond its policy-level requirements, the newly-implemented Elementary and Secondary Education/Every Student Succeeds Act (ESEA/ESSA) transfers virtually all of the effective school and schooling decisions, procedures, and practices away from the U.S. Department of Education and into the “hands” of the respective state departments of education and their districts and schools.

·       Because of this “transfer of responsibility,” states, districts, and schools will be more responsible (and accountable) for selecting their own approaches to curriculum, instruction, assessment, intervention, and evaluation than ever before.

·       This will result in significant variability—across states and districts—in how they define school “success” and student progress, measure school and teacher effectiveness, apply assessments to track students’ standards-based knowledge and proficiency, and implement multi-tiered academic and behavioral services and interventions for students.

   All of this means that districts and schools will have more freedom—but greater responsibility—to evaluate, select, and implement their own ways of functionally addressing all students’ academic and social, emotional, and behavioral learning and instructional needs—across a multi-tiered continuum that extends from core instruction to strategic response and intensive intervention.

   This “local responsibility” is bolstered by the fact that, while ESEA/ESSA discusses districts’ need to implement “multi-tiered systems of supports” and “positive behavioral interventions and supports,” these terms are written in the law in lower case and without the presence of any acronyms. 

   Thus, while the U.S. Department of Education strongly advocates for (and largely singularly funds) the PBIS and MTSS frameworks that it created, these frameworks are not mandated by ESEA/ESSA or by any other federal law (such as IDEA).

   In other words, districts and schools are completely free to establish their own multi-tiered systems of supports and positive behavioral interventions and support systems so long as they are consistent with law, empirically defensible, and result in sustainable student outcomes.
_ _ _ _ _

Revisiting the “Top Ten Ways that Educational Leaders Make Flawed, Large-Scale Programmatic Decisions”

   Part I of this series [CLICK HERE] then discussed the fact that, while districts and schools will have more ESEA/ESSA responsibility and self-determination, they may not all be prepared to make the decisions that they have to make in scientifically, psychometrically, methodologically, and contextually-sound ways.

   This is not to suggest that educators are trying to be ineffective.  It is just that they do not have the time, people, and resources to be MORE effective, and they often do not know what they do not know. 

   The Blog then described the “Top Ten” reasons why educational leaders make flawed, large-scale programmatic decisions—decisions that waste time, money, and resources, and that frustrate staff and students and cause resistance and disengagement.

   The flawed Reasons discussed were:

1.   The Autocrat (I Know Best)
2.   The Daydream Believer (My Colleague Says It Works)
3.   The Connected One (It’s On-Line)
4.   The Bargain Basement Boss (If it’s Free, It’s for Me)
5.   The Consensus-Builder (But the Committee Recommended It)
6.   The Groupie (But a National Expert Recommended It)
7.   The Do-Gooder (It’s Developed by a Non-Profit)
8.   The Enabler (It’s Federally or State-Recommended)
9.   The Abdicator (It’s Federally or State-Mandated)
10.   The Mad Scientist (It’s Research-based)

   By self-reflecting on these flawed approaches, the hope is that educational leaders will avoid these hazards, and make their district- or school-wide programmatic decisions in more effective ways.
_ _ _ _ _

   In Part III (in two weeks), we will specifically look at what a meta-analysis is and is not—highlighting the work of John Hattie.

   In this Part II Blog, we will discuss #10 (It’s Research-based) in more depth.  Specifically, we will differentiate among three terms that are bandied around when evaluating the efficacy of programs, interventions, and other district-wide or school-wide strategies. 

   In all, we will leave you with the critical questions that need to be asked when objectively evaluating programs being considered for district-wide or school-wide implementation. . . all so that you can make sound programmatic decisions.
_ _ _ _ _ _ _ _ _

“Scientifically based” versus “Evidence-based” versus “Research-based”

   As I provide consultation services to school districts across the country (and world), I continually hear people using three related terms to describe their practice—or their selection of specific services, supports, instruction, strategies, programs, or interventions.

   The terms are “scientifically-based,” “evidence-based,” and “research-based” . . . and many educators seem to be using them interchangeably.

   And so, because these terms are critical to understanding how to objectively evaluate the quality of a program or intervention being considered for implementation, I provide below a brief history of these terms (and their definitions, when they exist).

   As this series is focusing on the Elementary and Secondary Education Act (ESEA), I will restrict this brief analysis to (a) the 2001 version of ESEA (No Child Left Behind; ESEA/NCLB); (b) the current 2015 version of ESEA (Every Student Succeeds Act; ESEA/ESSA); and (c) ESEA’s current “brother”—the Individuals with Disabilities Education Act (IDEA 2004).
_ _ _ _ _

Scientifically Based

   This term appeared in ESEA/NCLB 2001 twenty-eight times, and it was (at that time) the “go-to” definition in federal education law when discussing how to evaluate the efficacy, for example, of research or programs that states, districts, and schools needed to implement as part of their school and schooling processes.

   Significantly, this term was defined in the law.  According to ESEA/NCLB:

The term scientifically based research—

(A) means research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs; and

(B) includes research that—

(i) employs systematic, empirical methods that draw on observation or experiment;

(ii) involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn;

(iii) relies on measurements or observational methods that provide reliable and valid data across evaluators and observers, across multiple measurements and observations, and across studies by the same or different investigators;

(iv) is evaluated using experimental or quasi-experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls;

(v) ensures that experimental studies are presented in sufficient detail and clarity to allow for replication or, at a minimum, offer the opportunity to build systematically on their findings; and

(vi) has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review.
_ _ _ _ _

   The term “scientifically based” is found in IDEA 2004 twenty-five times—mostly when describing “scientifically based research, technical assistance, instruction, or intervention.”

   The term “scientifically based” is found in ESEA/ESSA 2015 ONLY four times—mostly as “scientifically based research.”  This term appears to have been replaced by the term “evidence-based” (see below) as the “standard” that ESEA/ESSA wants used when programs or interventions are evaluated for their effectiveness.
_ _ _ _ _

Evidence-Based

   This term DID NOT APPEAR in either ESEA/NCLB 2001 or IDEA 2004.

   It DOES appear in ESEA/ESSA 2015—sixty-three times (!!!), most often when describing “evidence-based research, technical assistance, professional development, programs, methods, instruction, or intervention.”

   Moreover, because “evidence-based” is the new (and current) “go-to” standard for determining whether programs or interventions have been empirically demonstrated to be effective, ESEA/ESSA 2015 defines this term.

   As such, according to ESEA/ESSA 2015:

(A) IN GENERAL.—Except as provided in subparagraph (B), the term ‘evidence-based’, when used with respect to a State, local educational agency, or school activity, means an activity, strategy, or intervention that

   ‘(i) demonstrates a statistically significant effect on improving student outcomes or other relevant outcomes based on—

      ‘(I) strong evidence from at least 1 well-designed and well-implemented experimental study;

      ‘(II) moderate evidence from at least 1 well-designed and well-implemented quasi-experimental study; or

      ‘(III) promising evidence from at least 1 well-designed and well-implemented correlational study with statistical controls for selection bias; or

   ‘(ii)(I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and

      ‘(II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.”

(B) DEFINITION FOR SPECIFIC ACTIVITIES FUNDED UNDER THIS ACT.—When used with respect to interventions or improvement activities or strategies funded under Section 1003 [School Improvement], the term ‘evidence-based’ means a State, local educational agency, or school activity, strategy, or intervention that meets the requirements of subclause (I), (II), or (III) of subparagraph (A)(i).
_ _ _ _ _

Research-Based

   This term appeared five times in ESEA/NCLB 2001; it appears four times in IDEA 2004; and it appears once in ESEA/ESSA 2015.  When it appears, the term is largely used to describe programs that need to be implemented by schools to support student learning.

   Significantly, the term research-based is NOT defined in either ESEA law (2001, 2015) or in IDEA 2004.
_ _ _ _ _ _ _ _ _ _

What Should You Know and Ask When Programs Use these Terms?

Scientifically Based

   At this point, if someone uses the term “scientifically based,” they probably don’t know that this term has functionally been expunged as the “go-to” standard in federal education law. 

   At the same time, as an informed consumer, you can still ask what the researcher or practitioner means by “scientifically based.”  Then—if the practitioner is recommending a specific program, and endorsing it as “scientifically based,” ask for (preferably refereed) studies and their descriptions of the:

   * Demographic backgrounds and other characteristics of the students participating in the studies (so you can compare and contrast these students to your students);

   * Research methods used in the studies (so you can validate that the methods were sound, objective, and that they involved control or comparison groups not receiving the program or intervention);

   * Outcomes measured and reported in the studies (so you can validate that the research was focused on student outcomes, and especially the student outcomes that you are most interested in for your students);

   * Data collection tools, instruments, or processes used in the studies (so that you are assured that they were psychometrically reliable, valid, and objective—such that the data collected and reported are demonstrated to be accurate);

   * Treatment or implementation integrity methods and data reported in the studies (so you can objectively determine that the program or intervention was implemented as it was designed, and in ways that make sense);

   * Data analysis procedures used in the studies (so you can validate that the data-based outcomes reported were based on the “right” statistical and analytic approaches);

   * Interpretations and conclusions reported by the studies [so you can objectively validate that these summarizations are supported by the data reported, and have not been inaccurately interpreted or over-interpreted by the author(s)]; and the

   * Limitations reported in the studies (so you understand the inherent weaknesses in the studies, and can assess whether these weaknesses affected the integrity of and conclusions—relative to the efficacy of the programs or interventions—drawn by the studies).
_ _ _ _ _

Evidence-Based

   Moving on:  If a researcher or practitioner describes a program or intervention as “evidence-based,” you need to ask them whether they are using the term as defined in ESEA/ESSA 2015 (see above).

   Beyond this, we need to recognize that—relatively speaking—few of the educational programs or psychological interventions used in schools meet the experimental or quasi-experimental criteria in the Law.

   Thus, it would be wise to assume that most educational programs or psychological interventions will be considered “evidence-based” because of these components in the Law:

   ‘(ii)(I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and

      ‘(II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.”


   As such, as an informed consumer, you need to ask the researcher or practitioner (and evaluate the responses to) all of the same questions as outlined above for the “scientifically based” research assertions.
_ _ _ _

Research-Based

   In essence, if a researcher or practitioner uses the term “research-based,” they probably don’t know that the “go-to” term, standard, and definition in federal education law is “evidence-based.”

   At the same time, as an informed consumer, a researcher or practitioner’s use of the “research-based” term should raise some “red flags”—as it might suggest that the quality of the research supposedly validating the recommended program or intervention is suspect.

   Regardless, as an informed consumer, you should still ask the researcher or practitioner (and evaluate the responses to) all of the same questions as outlined above for the “scientifically based” research assertions.

   Ultimately, after (a) collecting the information from the studies supposedly supporting a specific program or intervention, and (b) answering all the questions above, you need to determine the following:

   * Is there enough objective information to conclude that the “recommended” program or intervention is independently responsible for the student outcomes that are purported and reported?

   * Is there enough objective data to demonstrate that the “recommended” program or intervention is appropriate for MY student population, and will potentially result in the same positive and expected outcomes?

   [The point here is that the program or intervention may be effective—but only with certain students. . . and not YOUR students.]


   * Will the resources needed to implement the program be time- and cost-effective relative to the “Return-on-Investment”?

   [These resources include, for example, the initial and long-term cost for materials, professional development time, specialized personnel, coaching and supervision, evaluation, parent and community outreach, etc.]


   * Will the “recommended” program or intervention be acceptable to those involved (e.g., students, staff, administrators, parents) such that they are motivated to implement it with integrity and over an extended period of time?

   [There is extensive research on the “acceptability” of interventions, and the characteristics or variables that make program or intervention implementation likely or not likely.]
_ _ _ _ _

Additional Cautions Regarding Research

   Clearly, research has validated some programs, interventions, and/or strategies.  As an inherent part of this validation, the programs have been implemented and evaluated with intensity and integrity, and they have been meaningfully applied to address specific student, staff, and school outcomes.

   But. . . in answering many of the questions posed throughout this Blog:

   * Some programs or interventions will not have demonstrable efficacy;

   * Some will demonstrate their efficacy—but not be applicable to YOUR students or situations; and

   * Some will claim efficacy, but the research is NOT sound, or the (favorable) conclusions are not warranted by the research.


   Indeed, poor-quality research typically was completed (a) by convenience; (b) with small, non-representative, and non-random samples; (c) without comparisons to matched “control groups;” and (d) in scientifically unsound ways.  Moreover, some of the “research” was not independently, objectively, or “blindly” reviewed by three or more experts in the field (as occurs when someone publishes their work in a “refereed” professional journal).

   When research is not sound, it is usually because:

   * The “researchers” are more interested in “marketing, influence, fame, or fortune” and their “research” really doesn’t even qualify as legitimate research [this “science” is pseudoscience]

   * The researchers are simply not knowledgeable or skilled in conducting sound research [this science ranges from clumsy to inept]

   * The researchers do not have the resources to conduct research at the needed level of complexity or sophistication [this science ranges from ill-advised to well-intended]


   When research is not appropriately applied, it is usually because:

   * The researchers have interpreted (or recommend the use of) their results in ways that go well beyond the intent of their original research, or the people, problems, or parameters involved in that research

   * The researchers have confused correlational results with causal results (or have represented them as causal), and implementing schools or districts have accepted the (false) belief that, for example, “research has proven that this program will directly and exclusively solve this problem”

   * The implementing schools or districts do not have the skills or capacity to independently evaluate the research, and they mistakenly (or wishfully) conclude that, for example, a specific program will work “with our students, in our settings, with our staff and resources, given our current problems and desired outcomes”—even though that program has never been tested or validated under those circumstances
_ _ _ _ _

 PLEASE NOTE:  Anyone can do their own research, pay $50.00 to establish a website, and begin to market their products.  To determine whether the research is sound, whether the program produces the results it says it does, and whether the same results will meaningfully transfer into your school, agency, or setting, YOU need to do your own investigation, analysis, and due diligence.

   Too many programs (as noted above) are purchased because of someone else’s personal experience and testimony, because of their “popularity” and marketing, because of a “celebrity” endorsement, or because they are “easy” to implement.

   Once again, educational programs and psychological interventions (as well as instruction, curricula, services, strategies, etc.) need to be evidence-based.  And, we need to use this term as defined and operationalized in ESEA/ESSA 2015.
_ _ _ _ _ _ _ _ _ _

Summary

   I understand that all of this takes time.  At the same time, I know that districts invest this time every time that they choose a new reading or math or science program.

   The questions are:  Are we using our time effectively?  Are we asking the questions and collecting the information that will help us to identify the best program for our students, our staff, and our schools?  And, are we prepared to use the data objectively so that the best choice is made?
_ _ _ _ _

   I hope that you found this Blog—and Part I [CLICK HERE]—helpful and meaningful to your work.

   I always look forward to your comments. . . whether on-line or via e-mail.

   I hope that your school year has started successfully.  To those in the greater Houston area and across Florida, we are thinking about you.

   If I can help you in any of the areas discussed in this and other school improvement-focused Blog messages, know that I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students, staff/colleagues, school(s), and district.

   In Part III (in two weeks) of this series, we will specifically look at what a meta-analysis is and is not—highlighting the work of John Hattie.

   Have a great next two weeks !!!

Best,

Howie


Saturday, August 26, 2017

The Top Ten Ways that Educators Make Bad, Large-Scale Programmatic Decisions (Part I of III)


The Hazards of ESEA/ESSA’s Freedom and Flexibility at the State and Local Levels
  
Dear Colleagues,

Introduction

   As we plan in earnest for the full implementation of the Elementary and Secondary Education Act/Every Student Succeeds Act (ESEA/ESSA), the following “truths” have become self-evident:

   * There is going to be more variability—relative to school and teacher effectiveness, student standards and assessments, multi-tiered academic and behavioral interventions, and school “success” and student “proficiency”—across our states and districts . . . than ever before.

   * States, districts, and schools will be more responsible for selecting their own approaches relative to curriculum, instruction, assessment, intervention, and evaluation . . . than ever before.

   And because of this:

   * The impact of frequently-changing superintendents and school administrators, inequitable school staffing and teacher shortages, and a focus on the type of school (e.g., public versus charter schools) rather than the quality of a school . . . will result in more student gaps and failures than ever before.
_ _ _ _ _

   This is a good thing and a bad thing.

   It is a good thing, because many of the U.S. Department of Education’s preferred and pushed No Child Left Behind frameworks (e.g., School Improvement, PBIS, RtI/MTSS, Reading First/Literacy) did not work, and yet, were funded (and continue to be funded relative to PBIS and MTSS) by billions (that’s with a “B”) of your taxpayer dollars.

   [See—as an example—my June 3rd Blog, Effective School-wide Discipline Approaches: Avoiding Educational Bandwagons that Promise the Moon, Frustrate Staff, and Potentially Harm Students . . . CLICK HERE]

   In essence, across two administrations—one Republican (Bush II) and one Democratic (Obama)—the U.S. Department of Education failed us, and—in their arrogance that they knew best how to educate all students—they left us all behind.
_ _ _ _ _

   It is a bad thing because, as alluded to above, our districts and schools—who have enough to do with the day-to-day education of our students—will need to make more curriculum, instruction, intervention, and evaluation decisions than ever before. 

   And, I fear, they are not prepared to make these decisions in scientifically, psychometrically, methodologically, and contextually-sound ways.
_ _ _ _ _

   It’s not that our educators across the country are trying to be ineffective. 

   It is just that they do not have the time, people, and resources to be MORE effective, and they often do not know what they do not know. 

   That is, some educators do not have the sophisticated scientific, psychometric, or methodological in-house expertise to make some critical decisions.  And so, they go out-house to the “experts”—some of whom are expert more in marketing themselves, than in recommending or providing true evidence-based, sustainable outcomes.
_ _ _ _ _ _ _ _ _ _

The “Top Ten” Ways that Educators Make Bad Programmatic Decisions

   As a practitioner who has worked in the schools, at two Research I universities, within a state department of education, and as a consultant in every state in the country over 35+ years, I find that educators make important, large-scale (i.e., at the state, district, or school levels) programmatic (curriculum, instruction, intervention, or evaluation) decisions in ways that are incredibly flawed.

   And these flawed decisions waste time, money, resources, and energy . . . and they often progressively result in frustrated and resistant staff, and disengaged and negatively-impacted students.

   I bring these flaws to your consciousness so that we can all recognize and eliminate these flaws in the future.

   At the same time, I fully recognize that, sometimes, a flawed decision actually works.  (Remember that even low-probability-of-success events sometimes are successful!  Someone is going to win the lottery—against all odds!)

   PLEASE NOTE ALSO that the “personal labels” associated with each Reason below are not intended to offend anyone . . . I just want you to think about the implications—to students, colleagues, and schools—of each flaw.
_ _ _ _ _

   And so:  Here—with brief commentary—are the “David Letterman Top Ten Reasons” why educators make (sometimes flawed) programmatic decisions.

Reason #1: “Because I Know Best” (The “Autocrat”)

   The Flaw:  These are bad decisions made by the educators “in control” of the District or its schools.  They are autocratic leaders who make largely unilateral decisions (a) because they “know everything;” (b) because they “can;” (c) because it’s politically expedient; (d) because of who “has their ear;” or (e) because of some related factor or influence.
_ _ _ _ _

Reason #2: “Because My Colleague Says It Works” (The “Daydream Believer”)

   The Flaw:  These are bad decisions made by educators who depend on the testimonials of “trusted colleagues” who attest that a specific program “works” because either (a) they have tried it; or (b) one of their “trusted colleagues” has recommended it. 

   Often, these “trusted colleagues” have not themselves objectively validated the program’s efficacy—based on sound research or actual implementation.  More critically, for the educator making the decision for their District or school, it appears that “blind trust” has “carried the day.”
_ _ _ _ _

Reason #3: “Because It’s On-Line” (The “Connected One”)

   The Flaw:  These are bad decisions made by educators who believe that anything posted on-line (by the media, marketers, actual researchers, or others) is true—simply because it is on-line.  They assume that when something goes “viral”—the “likes,” “retweets,” and “shares” attest to the validity of the program.
_ _ _ _ _

Reason #4: “Because It’s Free” (The “Bargain Basement Boss”)

   The Flaw:  These are bad decisions made by educators who embrace a program because it is provided “at no cost” to the District or its schools.  They fail to understand that a “free” program that does not work or actually impedes or implodes the progress of the District has significant negative costs—to students, staff, and schools.

   As alluded to above, the U.S. Department of Education (as well as many state departments of education, and many non-profit Foundations) have “pushed” their preferred programs down to districts and schools by providing “free” grants and training.  Given restricted and tight budgets, many “accept” the training—assuming that these “experts” know what they are doing (see Reasons #7 and 8).

   Relative to the departments of education, these programs are not free—they use your taxpayer dollars.  Relative to the Foundations, we need to be aware that social, political, or other agendas may be embedded in the grants.  Relative to all, we need to independently validate the programs before we “cash the checks.”

   Critically:  We need to get away from the “If it’s free, it’s for me” mentality in education.
_ _ _ _ _

Reason #5: “Because the Committee Recommended It” (The “Failed Consensus-Builder”)

   The Flaw:  These are bad decisions made by the educators “in control” of the District or its schools based on the recommendations of a formally-constituted or informally-constituted committee or task force.

   The “benign” flaw here is when the Educator embraces the programmatic recommendation assuming that the committee has done its “due diligence,” and that “it knows best.”

   The “damaging” flaw is when the Educator accepts a committee’s recommendation (a) knowing that it is flawed, but (b) is afraid of the functional or political consequences of overturning it.
_ _ _ _ _

Reason #6: “Because a National Expert Recommended It” (The “Groupie”)

   The Flaw:  These are bad decisions made by educators because they were recommended by a “national expert”—an author, presenter, program developer, or consultant—who is either working independently, or working for a company, educational resource center, state or federal department of education, etc.

   While, clearly, some experts are “true” experts, well-intended, and their recommendations are sound, decision-making educators still need to independently validate their recommendations, and—even if sound—determine whether they can be applied to the students, staff, and schools in question.

   Other “experts”— who advocate untested, unreliable, non-transferable, or invalid programs—run the spectrum from those who truly believe that they are doing good, to those who are out to do good only for themselves.

   The bottom line?  Decision-makers are accountable for their decisions, NOT the recommendations from an expert.
_ _ _ _ _

Reason #7: "Because It’s Developed by a Non-Profit" (The “Do-Gooder”)

   The Flaw:  These are bad decisions made by educators who believe that programs developed and disseminated by non-profit agencies, organizations, or foundations “must be good” because they come from groups with pure, altruistic, or selfless intentions or motivations.

   While this may be true, this is not necessarily true.  The reality, with all due respect to my non-profit colleagues, is that non-profits are businesses that need to make money to stay in business. 

   Simplistically, the differences between for-profits and non-profits are that the latter are legally-bound to use their profits in specific, IRS-controlled ways, and they are legally-restricted (depending on their categorization) to certain, limited (or NO) political/lobbying activities.

   And so . . . a program from a non-profit should be objectively analyzed and vetted in the same way as a program from a for-profit organization or company.

   Significantly, I do not want educators to become cynical relative to non-profit agencies or foundations.  However, I do want them to remain vigilant and accountable to the students and staff whom they serve.
_ _ _ _ _

Reason #8: "Because It’s Federally- or State-Recommended" (The “Enabler”)

   Studies report that up to 35% of school districts nationwide implement department of education programs without any independent analysis or asking any questions.

   The Flaw:  These are bad decisions made by educators who believe that their federal and state departments of education (a) are working in everyone’s best educational interests; (b) have objectively analyzed the outcome data of the programs they are recommending (using their own required “Gold Standard” approach); (c) would not recommend a program that does not have a high probability of success; and (d) have “no dog in the fight.”

   Moreover, some educators make these programmatic decisions believing that adopting a federally- or state-recommended program is a safe bet—or, at least, that they will "be protected" regardless of the outcomes.

   Wake up, America!

   Let’s remember:  Only a third of the districts adopting one of the billion-dollar-supported federal School Improvement models (under No Child Left Behind) had any positive results . . . and the districts that did not make progress under the federally-required options were still held accountable for their failing outcomes.

   And, remember:  Less than 10 years ago . . . the Bush administration’s billion-dollar Reading First Program was defunded by Congress because officials in the U.S. Department of Education were found to have (a) favored specific reading programs, while eliminating a number of exceptional programs from consideration; (b) “stacked” the grant review panels with members biased toward the favored programs; and (c) changed state grant proposals after they were reviewed so that they could be funded with the favored programs.

   And while you can say that “this is all in the past” . . . it is not.  The federal law (ESEA/ESSA and IDEA) discusses or requires districts to provide “positive behavioral interventions and supports” and “multi-tiered systems of supports” (written in these laws in lower case and without acronyms). 

   And yet, the U.S. Department of Education, through most state departments of education, continues to singularly advocate their federally-funded (for years) and preferred PBIS and MTSS frameworks, respectively.  In fact, the Department-funded national Technical Assistance Centers for these frameworks continually misquote the federal laws to make it appear that their UPPER CASE approaches are legally mandated (through their lower case appearances in federal law).

   Relative to Reason #8, the “Enabling” educator making decisions in this area often is simply ignoring the “elephant in the room.”
_ _ _ _ _

Reason #9: "Because It’s Federally- or State-Mandated" (The “Abdicator”)

   The Flaw:  These are bad decisions made by educators who passively comply with federal and state department of education programs that represent these departments’ interpretation or operationalization of federal (e.g., ESEA, IDEA) and/or state laws or statutes. 

   Instead of abdicating and implementing programs that everyone knows will not positively impact students, staff, and schools, these educators should (a) question the programs and their efficacy; (b) recommend a proven, alternative program that has a higher probability of attaining the desired results; and/or (c) request and defend the need for a waiver.

   Unfortunately, educators sometimes abdicate their responsibilities in this area because they feel they can deflect the responsibility for a failed program “because it was mandated.”  At other times, they comply for fear of retribution or retaliation—for example, receiving undue scrutiny or sanctions during state department of education audits or compliance visits.

   While this is not fair. . . it is real.
_ _ _ _ _

Reason #10: "Because It’s ‘Research-based’" (The “Mad Scientist”)

   The Flaw:  These are bad decisions made by educators who review the “research” that appears to support a specific program, but who do not understand the difference between (a) methodologically sound versus unsound research; (b) research that measures perceptions versus research that objectively measures definitive and replicable outcomes; and (c) research that “works” in a perfectly controlled vacuum versus research that truly works in the real world.

   See . . . it’s not about a program’s association with research.  It is about the reliability, validity, and generalizability of the research.

   More specifically, some “research” is done (a) by convenience; (b) with small, non-representative, and non-random samples; (c) without comparisons to matched “control groups;” and (d) in scientifically unsound ways. 

   Here, educational decision-makers need to understand (or hire other professionals who understand) that this “research” has a low probability of succeeding when scaled-up in their district.

   Moreover, some “research” has not been independently, objectively, or “blindly” reviewed by three or more experts in the field (as when someone publishes their work in a “refereed” professional journal). 

   Relative to this latter point, educators need to understand that—even when studies are published in a refereed journal—some research is published because article submissions are low, and the publisher cannot “stop the presses.”  Said a different way, journals need to be published in order to attract annual subscriptions and stay in business.

   The ultimate point here is that educational decision-makers should not be “Mad Scientists”—assuming that “being published” correlates with “good research.”  They need to be “Discriminating Consumers” who recognize sound versus unsound research, and the importance of choosing only the former for programmatic implementation.
_ _ _ _ _ _ _ _ _ _

Summary and Coming Attractions

   As noted in the Introduction, the Elementary and Secondary Education Act/Every Student Succeeds Act (ESEA/ESSA) gives states, districts, and schools more freedom, flexibility, and responsibility for selecting their own approaches to curriculum, instruction, assessment, intervention, and evaluation.

   By describing the “Top Ten Ways that Educators Make Bad, Large-Scale Programmatic Decisions,” the hope is that school, district, state, and other educational leaders avoid these hazards, and make their decisions in empirically and functionally sound ways.

   There are some great, evidence-based approaches available to schools across the country.  Some of them, however, are getting lost in the marketing, promotion, publishing, and competitive “noise” that invades our professional lives each day.

   Mark Millar (who, believe it or not, writes comic books for a living) said:

   “Organizations who win, think deeply, choose wisely, and act decisively.”

   Educational decision-makers need to think deeply about the true needs of their students, staff, and schools—identifying what is working and needs to be maintained, as well as the gaps that exist, why they exist, and how they are going to be closed.

   They then need to choose wisely from the services, supports, programs, and interventions that are available, and that have demonstrated their ability—from research to practice—to close the gaps.  Here, they need to avoid the Top Ten approaches above—sometimes choosing to take “the road less traveled.”

   Finally, they need to act decisively—making sure that the training, resources, and mentoring needed for successful implementation are available and sustained.

   If we do all this, the “permission” granted by ESEA/ESSA will result in the “promise” of better academic and social, emotional, and behavioral outcomes for all students.
_ _ _ _ _ _ _ _ _ _

   Once again, the goal of this Blog was not to stereotype, offend, or blame anyone relative to their educational decision-making processes.  Instead, the goal was to recognize, characterize, and memorialize some of the ways that poor decisions are made—and to emphasize that these decisions have real and sometimes long-lasting impact on students, staff, and schools.

   In Part II of this Blog message (in two weeks), we will continue to look at ways to understand sound versus unsound research.  In that message, we will look at the definitions, histories, and functional aspects of the terms "scientifically based," "evidence-based," and "research-based," specifically in the ESEA and IDEA federal laws.  We will also discuss the critical questions that educational leaders should ask when a researcher or practitioner (recommending or endorsing a specific program or intervention) says that the program or intervention has research demonstrating its efficacy.

   Part III of this Blog is tentatively titled, Hattie’s Meta-Analysis Madness:  The Method is Missing.  Obviously, this Blog will discuss the “ins and outs” of a meta-analysis, and what educational leaders need to know about Hattie's work and conclusions.  Stay tuned.
_ _ _ _ _

   As always, I look forward to your comments. . . whether on-line or via e-mail.

   And—with the new school year now upon us:  If I can help you in any of the school improvement, school discipline and behavioral intervention, or multi-tiered service and support areas where I specialize, please do not hesitate to contact me.

   I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students, staff/colleagues, school(s), and district.

   Welcome back!  Make it a GREAT YEAR !!!

Best,

Howie