Saturday, August 26, 2017

The Top Ten Ways that Educators Make Bad, Large-Scale Programmatic Decisions (Part I of III)


The Hazards of ESEA/ESSA’s Freedom and Flexibility at the State and Local Levels
  
Dear Colleagues,

Introduction

   As we plan in earnest for the full implementation of the Elementary and Secondary Education Act/Every Student Succeeds Act (ESEA/ESSA), the following “truths” have become self-evident:

   * There is going to be more variability—relative to school and teacher effectiveness, student standards and assessments, multi-tiered academic and behavioral interventions, and school “success” and student “proficiency”—across our states and districts . . . than ever before.

   * States, districts, and schools will be more responsible for selecting their own approaches relative to curriculum, instruction, assessment, intervention, and evaluation . . . than ever before.

   And because of this:

   * Frequently-changing superintendents and school administrators, inequitable school staffing and teacher shortages, and a focus on the type of school (e.g., public versus charter schools) rather than the quality of a school . . . will result in more student gaps and failures than ever before.
_ _ _ _ _

   This is a good thing and a bad thing.

   It is a good thing, because many of the U.S. Department of Education’s preferred and pushed No Child Left Behind frameworks (e.g., School Improvement, PBIS, RtI/MTSS, Reading First/Literacy) did not work, and yet they were funded (and, in the case of PBIS and MTSS, continue to be funded) by billions (that’s with a “B”) of your taxpayer dollars.

   [See—as an example—my June 3rd Blog, Effective School-wide Discipline Approaches: Avoiding Educational Bandwagons that Promise the Moon, Frustrate Staff, and Potentially Harm Students . . . CLICK HERE]

   In essence, across two administrations—one Republican (Bush II) and one Democratic (Obama)—the U.S. Department of Education failed us, and—in their arrogance that they knew best how to educate all students—they left us all behind.
_ _ _ _ _

   It is a bad thing because, as alluded to above, our districts and schools—who have enough to do with the day-to-day education of our students—will need to make more curriculum, instruction, intervention, and evaluation decisions than ever before. 

   And, I fear, they are not prepared to make these decisions in scientifically, psychometrically, methodologically, and contextually-sound ways.
_ _ _ _ _

   It’s not that our educators across the country are trying to be ineffective. 

   It is just that they do not have the time, people, and resources to be MORE effective, and they often do not know what they do not know. 

   That is, some educators do not have the sophisticated scientific, psychometric, or methodological in-house expertise to make some critical decisions.  And so, they go out-house to the “experts”—some of whom are more expert in marketing themselves than in recommending or providing true evidence-based, sustainable outcomes.
_ _ _ _ _ _ _ _ _ _

The “Top Ten” Ways that Educators Make Bad Programmatic Decisions

   As a practitioner who has worked in the schools, at two Research I universities, within a state department of education, and as a consultant in every state in the country over 35+ years, I find that educators make important, large-scale (i.e., at the state, district, or school levels) programmatic (curriculum, instruction, intervention, or evaluation) decisions in ways that are incredibly flawed.

   And these flawed decisions waste time, money, resources, and energy . . . and, over time, they often result in frustrated and resistant staff, and disengaged and negatively-impacted students.

   I bring these flaws to your attention so that we can all recognize and eliminate them in the future.

   At the same time, I fully recognize that, sometimes, a flawed decision actually works.  (Remember that even low-probability-of-success events sometimes are successful!  Someone is going to win the lottery—against all odds!)
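   To put a rough number on that lottery point, here is a minimal simulation sketch (in Python, with made-up figures purely for illustration): if a flawed programmatic decision has, say, only a one-in-five chance of “working,” then across a few hundred districts making similar decisions, dozens will still be able to point to a success.

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical assumptions: 300 districts each make the same flawed
# decision, and each decision has only a 20% chance of "working."
SUCCESS_PROBABILITY = 0.20
DISTRICTS = 300

lucky = sum(random.random() < SUCCESS_PROBABILITY for _ in range(DISTRICTS))

print(f"{lucky} of {DISTRICTS} districts 'won the lottery'")
# Roughly 60 districts succeed despite the flawed process, which is why
# "it worked for them" is never, by itself, proof of a sound decision.
```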

   PLEASE NOTE ALSO that the “personal labels” associated with each Reason below are not intended to offend anyone . . . I just want you to think about the implications—to students, colleagues, and schools—of each flaw.
_ _ _ _ _

   And so:  Here—with brief commentary—are the “David Letterman Top Ten Reasons” why educators make (sometimes flawed) programmatic decisions.

Reason #1: “Because I Know Best” (The “Autocrat”)

   The Flaw:  These are bad decisions made by the educators “in control” of the District or its schools.  They are autocratic leaders who make largely unilateral decisions because (a) they “know everything;” (b) they “can;” (c) it’s politically expedient; (d) someone “has their ear;” or (e) some related factor or influence is at play.
_ _ _ _ _

Reason #2: “Because My Colleague Says It Works” (The “Daydream Believer”)

   The Flaw:  These are bad decisions made by educators who depend on the testimonials of “trusted colleagues” who attest that a specific program “works” because either (a) they have tried it; or (b) one of their “trusted colleagues” has recommended it. 

   Often, these “trusted colleagues” have not themselves objectively validated the program’s efficacy—based on sound research or actual implementation.  More critically, for the educator making the decision for their District or school, it appears that “blind trust” has “carried the day.”
_ _ _ _ _

Reason #3: “Because It’s On-Line” (The “Connected One”)

   The Flaw:  These are bad decisions made by educators who believe that anything posted on-line (by the media, marketers, actual researchers, or others) is true—simply because it is on-line.  They assume that when something goes “viral”—the “likes,” “retweets,” and “shares” attest to the validity of the program.
_ _ _ _ _

Reason #4: “Because It’s Free” (The “Bargain Basement Boss”)

   The Flaw:  These are bad decisions made by educators who embrace a program because it is provided “at no cost” to the District or its schools.  They fail to understand that a “free” program that does not work or actually impedes or implodes the progress of the District has significant negative costs—to students, staff, and schools.

   As alluded to above, the U.S. Department of Education, many state departments of education, and many non-profit Foundations have “pushed” their preferred programs down to districts and schools by providing “free” grants and training.  Given restricted and tight budgets, many districts “accept” the training—assuming that these “experts” know what they are doing (see Reasons #7 and 8).

   Relative to the departments of education, these programs are not free—they use your taxpayer dollars.  Relative to the Foundations, we need to be aware that social, political, or other agendas may be embedded in the grants.  Relative to all, we need to independently validate the programs before we “cash the checks.”

   Critically:  We need to get away from the “If it’s free, it’s for me” mentality in education.
_ _ _ _ _

Reason #5: “Because the Committee Recommended It” (The “Failed Consensus-Builder”)

   The Flaw:  These are bad decisions made by the educators “in control” of the District or its schools based on the recommendations of a formally or informally constituted committee or task force.

   The “benign” flaw here is when the Educator embraces the programmatic recommendation assuming that the committee has done its “due diligence,” and that “it knows best.”

   The “damaging” flaw is when the Educator accepts a committee’s recommendation (a) knowing that it is flawed, but (b) is afraid of the functional or political consequences of overturning it.
_ _ _ _ _

Reason #6: “Because a National Expert Recommended It” (The “Groupie”)

   The Flaw:  These are bad decisions made by educators who adopt a program because it was recommended by a “national expert”—an author, presenter, program developer, or consultant—who is either working independently, or working for a company, educational resource center, state or federal department of education, etc.

   While some experts clearly are “true” experts who are well-intended and whose recommendations are sound, decision-making educators still need to independently validate those recommendations, and—even if sound—determine whether they can be applied to the students, staff, and schools in question.

   Other “experts”— who advocate untested, unreliable, non-transferable, or invalid programs—run the spectrum from those who truly believe that they are doing good, to those who are out to do good only for themselves.

   The bottom line?  It is the decision-makers, NOT the recommending experts, who are accountable for the decisions.
_ _ _ _ _

Reason #7: "Because It’s Developed by a Non-Profit" (The “Do-Gooder”)

   The Flaw:  These are bad decisions made by educators who believe that programs developed and disseminated by non-profit agencies, organizations, or foundations “must be good” because they come from groups with pure, altruistic, or selfless intentions or motivations.

   While this may be true, it is not necessarily so.  The reality, with all due respect to my non-profit colleagues, is that non-profits are businesses that need to make money to stay in business.

   Simplistically, the differences between for-profits and non-profits are that the latter are legally-bound to use their profits in specific, IRS-controlled ways, and they are legally-restricted (depending on their categorization) to certain, limited (or NO) political/lobbying activities.

   And so . . . a program from a non-profit should be objectively analyzed and vetted in the same way as a program from a for-profit organization or company.

   Significantly, I do not want educators to become cynical relative to non-profit agencies or foundations.  However, I do want them to remain vigilant and accountable to the students and staff whom they serve.
_ _ _ _ _

Reason #8: "Because It’s Federally- or State-Recommended" (The “Enabler”)

   Too many school districts nationwide implement department of education programs without any independent analysis—and without asking any questions.

   The Flaw:  These are bad decisions made by educators who believe that their federal and state departments of education (a) are working in everyone’s best educational interests; (b) have objectively analyzed the outcome data of the programs they are recommending (using their own required “Gold Standard” approach); (c) would not recommend a program that does not have a high probability of success; and (d) have “no dog in the fight.”

   Moreover, some educators make these programmatic decisions believing that adopting a federally- or state-recommended program is a safe bet—or, at least, that they will "be protected" regardless of the outcomes.

   Wake up, America!

   Let’s remember:  Only a third of the districts adopting one of the billion-dollar-supported federal School Improvement models (under No Child Left Behind) had any positive results . . . and the districts that did not make progress under the federally-required options were still held accountable for their failing outcomes.

   And remember:  Less than 10 years ago . . . the Bush administration’s billion-dollar Reading First Program was defunded by Congress after officials in the U.S. Department of Education were found to have (a) favored specific reading programs, while eliminating a number of exceptional programs from consideration; (b) “stacked” the grant review panels with members biased toward the favored programs; and (c) changed state grant proposals after they were reviewed so that they could be funded with the favored programs.

   And while you can say that “this is all in the past” . . . it is not.  The federal laws (ESEA/ESSA and IDEA) discuss or require districts to provide “positive behavioral interventions and supports” and “multi-tiered systems of supports” (written in these laws in lower case and without acronyms).

   And yet, the U.S. Department of Education, through most state departments of education, continues to singularly advocate its preferred (and long federally-funded) PBIS and MTSS frameworks, respectively.  In fact, the Department-funded national Technical Assistance Centers for these frameworks continually misquote the federal laws to make it appear that their UPPER CASE approaches are legally mandated (through their lower case appearances in federal law).

   Relative to Reason #8, the “Enabling” educator making decisions in this area often is simply ignoring the “elephant in the room.”
_ _ _ _ _

Reason #9: "Because It’s Federally- or State-Mandated" (The “Abdicator”)

   The Flaw:  These are bad decisions made by educators who passively comply with federal and state department of education programs that represent these departments’ interpretation or operationalization of federal (e.g., ESEA, IDEA) and/or state laws or statutes. 

   Instead of abdicating and implementing programs that everyone knows will not positively impact students, staff, and schools, these educators should (a) question the programs and their efficacy; (b) recommend a proven, alternative program that has a higher probability of attaining the desired results; and/or (c) request and defend the need for a waiver.

   Unfortunately, educators sometimes abdicate their responsibilities in this area because they feel they can deflect the responsibility for a failed program “because it was mandated.”  At other times, they comply for fear of retribution or retaliation—for example, receiving undue scrutiny or sanctions during state department of education audits or compliance visits.

   While this is not fair. . . it is real.
_ _ _ _ _

Reason #10: "Because It’s ‘Research-Based’" (The “Mad Scientist”)

   The Flaw:  These are bad decisions made by educators who review the “research” that appears to support a specific program, but who do not understand the difference between (a) methodologically sound versus unsound research; (b) research that measures perceptions versus research that objectively measures definitive and replicable outcomes; and (c) research that “works” in a perfectly controlled vacuum versus research that truly works in the real world.

   See . . . it’s not about a program’s association with research.  It is about the reliability, validity, and generalizability of the research.

   More specifically, some “research” is done (a) by convenience; (b) with small, non-representative, and non-random samples; (c) without comparisons to matched “control groups;” and (d) in scientifically unsound ways. 

   Here, educational decision-makers need to understand (or hire other professionals who understand) that this “research” has a low probability of succeeding when scaled up in their district (see the sketch below).
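
   For readers who want to see the problem rather than take it on faith, here is a minimal simulation sketch (in Python, with invented numbers purely for illustration) of point (b) above: when a program has no true effect at all, small convenience samples will still routinely produce impressive-looking “gains.”

```python
import random
import statistics

random.seed(7)  # reproducible illustration

def fake_study(n_per_group):
    """Simulate one study of a program with ZERO true effect.

    Both groups are drawn from the same score distribution (mean 100,
    SD 15), so any difference between them is pure sampling noise.
    """
    program = [random.gauss(100, 15) for _ in range(n_per_group)]
    control = [random.gauss(100, 15) for _ in range(n_per_group)]
    return statistics.mean(program) - statistics.mean(control)

# Run 1,000 hypothetical "studies" at a small and a large sample size.
for n in (12, 500):
    diffs = [fake_study(n) for _ in range(1000)]
    big = sum(abs(d) >= 5 for d in diffs)   # a 5-point gap looks impressive
    print(f"n = {n:>3} per group: {big / 10:.1f}% of studies show a 5+ point 'effect'")

# With about 12 students per group, a large share of the studies show a
# sizable "effect" by chance alone; with 500 per group, almost none do.
```

   The numbers themselves are hypothetical, but the lesson is not: sample size, random assignment, and matched control groups are what separate findings that will replicate in your district from findings that will not.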

   Moreover, some “research” has not been independently, objectively, or “blindly” reviewed by three or more experts in the field (as when someone publishes their work in a “refereed” professional journal). 

   Relative to this latter point, educators need to understand that—even when studies are published in a refereed journal—some research is published because article submissions are low and the publisher cannot “stop the presses.”  Said a different way, journals need to be published in order to attract annual subscriptions and stay in business.

   The ultimate point here is that educational decision-makers should not be “Mad Scientists”—assuming that “being published” correlates with “good research.”  They need to be “Discriminating Consumers” who recognize sound versus unsound research, and the importance of choosing only the former for programmatic implementation.
_ _ _ _ _ _ _ _ _ _

Summary and Coming Attractions

   As noted in the Introduction, the Elementary and Secondary Education Act/Every Student Succeeds Act (ESEA/ESSA) gives states, districts, and schools more freedom, flexibility, and responsibility for selecting their own approaches to curriculum, instruction, assessment, intervention, and evaluation.

   By describing the “Top Ten Ways that Educators Make Bad, Large-Scale Programmatic Decisions,” the hope is that school, district, state, and other educational leaders will avoid these hazards and make their decisions in empirically and functionally sound ways.

   There are some great, evidence-based approaches available to schools across the country.  Some of them, however, are getting lost in the marketing, promotion, publishing, and competitive “noise” that invades our professional lives each day.

   Mark Millar (who, believe it or not, writes comic books for a living) said:

   “Organizations who win, think deeply, choose wisely, and act decisively.”

   Educational decision-makers need to think deeply about the true needs of their students, staff, and schools—identifying what is working and needs to be maintained, as well as the gaps that exist, why they exist, and how they are going to be closed.

   They then need to choose wisely from the services, supports, programs, and interventions that are available, and that have demonstrated their ability—from research to practice—to close the gaps.  Here, they need to avoid the Top Ten approaches above—sometimes choosing to take “the road less traveled.”

   Finally, they need to act decisively—making sure that the training, resources, and mentoring needed for successful implementation are available and sustained.

   If we do all this, the “permission” granted by ESEA/ESSA will result in the “promise” of better academic and social, emotional, and behavioral outcomes for all students.
_ _ _ _ _ _ _ _ _ _

   Once again, the goal of this Blog was not to stereotype, offend, or blame anyone relative to their educational decision-making processes.  Instead, the goal was to recognize, characterize, and memorialize some of the ways that poor decisions are made—and to emphasize that these decisions have real and sometimes long-lasting impact on students, staff, and schools.

   In Part II of this Blog message (in two weeks), we will continue to look at ways to understand sound versus unsound research.  In that message, we will look at the definitions, histories, and functional aspects of the terms "scientifically based," "evidence-based," and "research-based" as used specifically in the ESEA and IDEA federal laws.  We will also discuss the critical questions that educational leaders should ask when a researcher or practitioner (recommending or endorsing a specific program or intervention) says that the program or intervention has research demonstrating its efficacy.

   Part III of this Blog is tentatively titled, Hattie’s Meta-Analysis Madness:  The Method is Missing.  Obviously, this Blog will discuss the "ins and outs" of a meta-analysis, and what educational leaders need to know about Hattie's work and conclusions.  Stay tuned.
_ _ _ _ _

   As always, I look forward to your comments. . . whether on-line or via e-mail.

   And—with the new school year now upon us:  If I can help you in any of the school improvement, school discipline and behavioral intervention, or multi-tiered service and support areas where I specialize, please do not hesitate to contact me.

   I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students, staff/colleagues, school(s), and district.

   Welcome back!  Make it a GREAT YEAR !!!

Best,

Howie
