Saturday, March 30, 2024

How Cognitive Biases Affect Student Perceptions and Educator Decisions

Making the Unconscious Conscious, and the Implicit Explicit

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]


Dear Colleagues,

Introduction

   Not to validate a long-standing stereotype, but as a school psychologist who tries to facilitate positive growth and change, I often think about how people form their attitudes and beliefs, and how these impact their perceptions and decisions.

   This typically occurs when I am consulting in a school. . . advocating for the most effective research-to-practice approaches for students, staff, school, and system success.

   Often, there are many potentially successful ways to attain desired school or educational goals and outcomes. As this occurs, we weigh the strengths and limits of the most-viable options and make a collaborative decision.

   During this process, we sometimes “agree to disagree” and, in the end, we accept the consensus decision. . . even though it may not be the one that was our favorite.

_ _ _ _ _

   At times, though, the “collaborative” process is not collaborative. Indeed, sometimes there are self-serving agendas, discussions that are not constructive, and decisions not based on objective information, research, facts, or fairness.

   This sometimes occurs because of implicit or unconscious biases, or explicit and conscious biases. When these biases interfere with or undermine sound decision-making, their presence and impact need to be identified, acknowledged, analyzed, and addressed.

   While I understand the personal, professional, and political challenges in this recommendation, let’s first spend some time understanding some of the most-prevalent biases and how they impact attitudes, beliefs, perceptions, and the decision-making process in educational settings.

_ _ _ _ _ _ _ _ _ _

The “Offspring” of Cognitive Biases: Stereotypes, Prejudices, and Discrimination

[Acknowledgement: Numerous websites were consulted to complete the research for this Blog. The most useful (and used below) were: Simplypsychology.org; Verywellmind.com; and Health.ClevelandClinic.org.]

   Cognitive Biases are based on our experiences, interactions, beliefs, and lived events. They help us to process information quickly, they are contextual, and they range from accurate to inaccurate, and helpful to harmful.

   Cognitive Biases can impact our attitudes, beliefs, expectations, attributions, and interpretations, as well as what we think or believe, with whom we interact, and how we interact.

   As noted earlier, there are times when we are unaware of our cognitive biases as they are unconscious or implicit.

   Conversely, there are times when we are fully aware of our cognitive biases and how they are impacting our emotions, thoughts, behavior, and decisions.

   In a negative sense, cognitive biases can be implicitly or explicitly inaccurate, and they can lead us to misperceive information, events, and specific groups of people or individuals. These biases can also lead to irrational thoughts, judgments, decisions, and interactions.

_ _ _ _ _

   While there are different layers of agreement (and disagreement), cognitive biases overlap with stereotypes, prejudices, and discrimination.

   Simplistically, Stereotypes are beliefs. Prejudices are attitudes. And Discrimination is a behavior.

   The open textbook Principles of Social Psychology (University of Minnesota, 2015) clarifies:

The principles of social psychology, including the ABCs—affect, behavior, and cognition—apply to the study of stereotyping, prejudice, and discrimination, and social psychologists have expended substantial research efforts studying these concepts.

 

The cognitive component in our perceptions of group members is the stereotype—the positive or negative beliefs that we hold about the characteristics of social groups. We may decide that “Italians are romantic,” that “old people are boring,” or that “college professors are nerds.” And we may use those beliefs to guide our actions toward people from those groups.

 

In addition to our stereotypes, we may also develop prejudice—an unjustifiable negative attitude toward an outgroup or toward the members of that outgroup. Prejudice can take the form of disliking, anger, fear, disgust, discomfort, and even hatred—affective states that can lead to (inappropriate) behavior.

 

(Indeed,) stereotypes and prejudices are problematic because they may create discrimination—unjustified negative behaviors toward members of outgroups based on their group membership.

_ _ _ _ _

   Expanding briefly, stereotypes and prejudices lead people to:

·       Assume that they have all of the information that they need—about a specific person or group, topic or subject matter, event or situation—thereby stopping them from collecting all of the objective facts; 

·       Interpret any existing information in the direction of their bias—resulting in faulty opinions or conclusions; and 

·       Generalize these faulty opinions or conclusions from individual (e.g., a student from a specific racial group who is behaving badly) to a prescribed group (believing that all individuals from that racial group behave badly), and/or from a prescribed group (all of the students in Period 7 who are failing science) to an individual (who just moved into the district and was put into 7th period science).

_ _ _ _ _ _ _ _ _ _

Describing and Applying (to Education) Selected Cognitive Biases

    More than 180 different cognitive biases have been identified by psychologists and other behavioral scientists. Below, we describe eight important ones, applying them to school staff, students, and administrators in educational settings.

   The eight chosen cognitive biases are organized along a continuum: those that impact our decisions before we make them, as we make them, and after we make them.

   Cognitive Biases Before We Make a Decision:

·       The Halo Effect

·       The Anchoring Bias

   Cognitive Biases As We Make a Decision:

·       The Confirmation Bias

·       The False Consensus Effect

·       The Optimism Bias

      Cognitive Biases After We Make a Decision:

·       The Misinformation Effect

·       The Hindsight Bias

·       The Self-Serving Bias

   We recently discussed the broad area of Cognitive Biases on Education Talk Radio, hosted by Larry Jacobs. Our 33-minute interview is posted at the end of this Blog.

_ _ _ _ _

Bias 1. The Halo Effect

   A Halo Effect occurs when our initial overall or general impressions of a student, teacher, related services professional, administrator—or even school—positively or negatively affect our later impressions, interactions, or evaluations.

   For example, if we are put-off because our first impression of a school is that it is excessively dirty and run-down, we may then attend only to future information—relative to, for example, the students, staff, or administration—that re-confirms our bias.

   If our first interaction with a student is that s/he is loud, overly-dramatic, and “street-wise,” we may then interact with that student in an impatient, cold, and curt way—resulting in a negative reaction from the student that, as part of a vicious cycle, re-validates our initial bias.

   If our first impression of a candidate for a staff or administrative position is that they graduated from the “right” school, are attractive and well-dressed, and smile and make good eye contact, we may shorten the interview and ask only “softball” questions, give them the benefit of the doubt when they are unable to answer other questions, and rate them higher than other candidates who actually have more experience, but who do not have the characteristics creating the positive bias.

_ _ _ _ _

Bias 2. The Anchoring Bias

   An Anchoring Bias occurs when people are influenced by the first piece of information that they hear; that piece of information becomes the benchmark or “anchor” through which all subsequent information is filtered.

   For example, when staff are told that the administration wants to keep class sizes down to 18 or 20 students for the coming year (and the staff all applaud), that information becomes the benchmark for what eventually occurs. 

   If the anticipated state funding (which was the basis for the desired class size) does not materialize, and class sizes “explode” to 23 students, then staff are disappointed (or worse) and they may “take it out” on their administrators’ annual staff evaluation ratings.

   If the special educators in a school are told that—next year—the service delivery model will be modified to free them up for one day of consulting with their general education colleagues each week, then that becomes the “anchor” that they use to evaluate their satisfaction with their roles during the coming year.

   If a rush of new students with disabilities appears over the summer, and the one day is reduced to a half-day, the staff may be so dissatisfied or disillusioned that they choose not to consult at all.

   If the students in a middle school are “warned” that the new administration will implement a no-nonsense discipline code for the coming year. . . they may refuse to listen to the administration’s collaborative student-staff philosophy during the first assembly when the new year actually begins.

   Moreover, their negatively-anchored bias may continue such that discipline offenses significantly increase during the first month of school, causing the administration to respond more punitively. . . thereby “validating” the students’ original inaccurate perceptions.

_ _ _ _ _

Bias 3. The Confirmation Bias

   A Confirmation Bias occurs when we already have a negative or inaccurate attitude or belief about something or someone, and we selectively choose or weigh the available subsequent information in ways that will only confirm our bias. In other words, people with a Confirmation Bias filter or believe the information that reinforces what they already think or believe, while missing or ignoring the information that invalidates these same thoughts or beliefs.

   For example, if part of a Reading or Intervention Selection Committee has a preconceived bias toward a specific curriculum or approach, they will select only the research or favored-author interview responses that confirm that curriculum or approach, even as they highlight the “weak” research and “unimpressive” competing-author responses associated with the approach(es) they disfavor.

   If an unpopular and outspoken—but usually correct and instructionally effective—staff member is going up for tenure, and an administrator has been biased by the “colleagues” who want her fired, the administrator might unconsciously give her low ratings during Classroom Walk-throughs. . . thereby confirming the negative bias.

   If student morale, attendance, engagement, and participation in a high school are low because the disengaged students—who make up the majority of the student body—legitimately see the administration and faculty favoring the smart, successful, and economically-advantaged students—who are in the minority—then (a) the administration and faculty’s bias would be confirmed if, in an attempt to change the climate of the school, they interviewed only the favored students, who then blamed the disengaged students for the low morale; and (b) the disengaged students’ bias—for example, that the administration and faculty would never listen to them anyway—would be confirmed when they gave up and never asked why they were not being interviewed and involved.

_ _ _ _ _

Bias 4. The False Consensus Effect

   A False Consensus Effect occurs when people overestimate how much others agree with their values, beliefs, attitudes, interpretations, and behaviors.

   For example, in a faculty meeting about curriculum, instruction, policy, and/or practice, a building principal may assume that her beliefs regarding what should occur will be so accepted that she short-circuits the discussion and prematurely puts “the solution” on the floor. When a large number of staff reject the solution and accuse the principal of “not listening” and “railroading” her position, the principal realizes the impact of the False Consensus Effect not only on the discussion and decision, but also on the distrust that her faculty now have in her.

   In our politically-divided world, many school staff have learned that fewer colleagues agree with their professional and personal perspectives (e.g., on teaching certain historical or current-event topics, or on gender-neutral or trans-inclusive restrooms or pronouns) than they had assumed.

   It is always interesting—for example, in a culturally-diverse school—to see how students overestimate the number of peers who agree with them when discussing historically or socially complex events or situations.

   This is an early life lesson for students, and a leadership role for teachers: To help students avoid the False Consensus Effect “trap” by learning how to ask questions that elicit peers’ beliefs and attitudes before they make broad and definitive statements that are naïve, inadvertently offensive, or inconsistent with a broader reality that they are unaware of.

_ _ _ _ _

Bias 5. The Optimism Bias

   The Optimism Bias occurs when we overestimate the likelihood that “everything will turn out OK,” and underestimate the potential of negative events, outcomes, or interactions.

   This bias has no place in strategic planning—one reason why a SWOT analysis includes evaluations of Weaknesses and Threats. The Optimism Bias sometimes results in people not fully doing their research and “due diligence,” which in turn keeps them from anticipating or planning for unexpected or even catastrophic events.

   The Optimism Bias was in full view during the early days of school shootings and student suicides when unprepared districts said, “It can’t happen here.”

   This Bias is evident when schools and staff believe that “everything will be OK” and that they will not be held accountable for the ineffective instruction that results in low grades and state proficiency scores, and not-ready-for-college-or-career student graduates.

   And this Bias is evident, conversely, when students have been “pumped up” with inflated grades and inappropriately optimistic feedback such that they believe that they are ready for the next test, academic school year, or post-graduation phase of their lives.

_ _ _ _ _

Bias 6. The Misinformation Effect

   The Misinformation Effect is the tendency to alter your recollection of what actually occurred during a specific event because of information encountered after the event was over.

   This is especially likely when the event occurs under conditions of emotionality. . . resulting in experiential gaps that you feel “need to be filled in.” It also occurs when questions about the event are asked in a certain way, or when you talk with someone who had a different perspective on the same event.

   The Misinformation Effect occurs when different people have different perspectives of a school crisis, a student fight, or a playground accident. . . even though they were all there at the same time. As one person shares their observations, the others listening may modify what they were “sure” occurred to conform to the first person’s account.

   The Misinformation Effect occurs when an evaluation team observes a teacher during a classroom walk-through. Once again, as different team members later share their observations, others on the team may “second guess” their “discrepant” observations and modify them accordingly.

   Finally, the Misinformation Effect may occur when an accident occurs on a kindergarten playground, and the students are interviewed later to determine what happened. If an administrator asks, “Did Michael get pushed off the slide?”. . . the students might agree even though they did not see anyone do this.

   If the administrator asks, “Did you see with your eyes Michael get pushed off the slide?”. . . the students might say, “No, we only saw him on the ground after he fell.”

_ _ _ _ _

Bias 7. The Hindsight Bias

   The Hindsight Bias occurs when, after an event—even a random event—has concluded, people act as if the event was predestined or predictable and say “I knew that was going to happen.”

   This occurs when a school, administrator, or staff member is unsuccessful, and others conclude, “We knew s/he wasn’t going to work out.”

   It occurs when students are unsuccessful and/or they later drop-out, and staff conclude, “We knew he would be just like his older brother.”

   And it occurs when students are randomly assigned to a group for a lab or project, do poorly on the assignment, and later think, “I knew this group was not going to be able to work together and succeed.”

_ _ _ _ _

Bias 8. The Self-Serving Bias

   The Self-Serving Bias is a tendency to take credit (e.g., for being prepared, working hard, being flexible—when this is not true) when good things happen, and to externalize the blame (e.g., to students, technology, “bad luck,” or the lack of administrative support) when bad things occur.

   When undeserved credit is taken, individuals do not learn what is really needed to be successful in the future, and—if the success was due to a group and not the individual taking credit—the group may feel disrespected and marginalized.

   When the responsibility for a poor outcome is externalized, individuals do not self-evaluate and change what needs to be changed, and they carry an inappropriate belief that others around them are incompetent or cannot be depended on.

   For example, when an unexpectedly high percentage of students pass their State Proficiency Tests in a school. . . despite the inept leadership of the administration and the ineffective instruction of the teachers, both administrators and teachers may take the “credit,” and not change or improve their instructional approaches for the next year.

   When a less capable and motivated group of students do not pass their State Proficiency Tests the next year, the administrators and teachers might then “blame” these students. . . after all, “they got the same support and instruction as last year’s group.”

   When half of the students in a project-based learning group work competently on the project, and the other half “skate through,” there are times when the latter group takes (self-serving bias) credit for the good grade that was given to the entire group.

   When these latter students later fail the individually-administered test on the project’s content, these students may externalize the blame for the bad grade—ignoring the fact that their earlier poor participation and effort was the root cause.

_ _ _ _ _ _ _ _ _ _

Preventing or Overcoming Cognitive Biases

   There is no one “way” to prevent or overcome one or more cognitive biases.

   Sometimes they can be prevented or overcome by using objective implementation success and outcome criteria and protocols (or rubrics)—tools that help us to more judiciously self-pace, self-monitor, self-evaluate, self-correct, and—accurately—self-reinforce.

   Sometimes cognitive biases are prevented or overcome when we establish “courageous” collegial climates and group norms and expectations that allow them to be modified through positive and constructive feedback, critique, and recommendations for continuous improvement.

   Sometimes these group norms include giving colleagues permission to approach others to share their observations regarding possible implicit or unconscious biases or blind spots.

_ _ _ _ _

   None of this—especially the latter—is easy. Indeed, maintaining professional (and personal) objectivity, integrity, accountability, and productivity is hard work.

   But it is the responsible and necessary work—especially in education—that we must do on behalf of our students, parents, and communities.

   When asked how to recognize and overcome cognitive biases, Dr. Kia-Rai Prewitt—writing for the Cleveland Clinic—suggested:

Accept that we all have cognitive biases. Start by acknowledging that we all have biases. If you don’t acknowledge it or even see it as an issue, then you probably won’t be open to understanding someone else’s perspective or thinking about things differently.

 

Have experiences with a variety of people. Intentionally seeking out conversations or opportunities to interact with people who have diverse backgrounds, ideas and ways of thinking can help. It’s important to hear how others might be approaching a situation.

 

Allow yourself cognitive flexibility. What does that mean? You want to consider the context before you interpret a situation or make a judgment about something. For example, someone who only sees things as black and white may not be considering other important information. Whereas, someone who has cognitive flexibility is able to see the gray area—that some things aren’t right or wrong, or this way or that way.

 

(When) help(ing) others identify and work on their own cognitive biases, tread carefully.

 

One of the things I suggest is to know your audience. Are they open to having a conversation? And even if they’re not, you can still use assertive communication. Sometimes, it’s not even saying that you agree. By just saying, ‘I understand’ or ‘I hear you’, you can help facilitate conversations with people who you may not agree with.

 

Overall, cognitive biases affect how you make decisions and can lead to difficulties in your career and personal life. But with practice, you can get better at recognizing when you may have a cognitive bias and how to change your perception of a situation. And, again, it’s important to recognize and accept that we all use cognitive bias.

 

Our brain is naturally wired to make sense of information. With all the information that is thrown at us at one time, we can only focus on certain things. And we use cognitive bias to help us process all that information. Unfortunately, sometimes, when we’re biased, we may be making an error in how we process that information.

_ _ _ _ _

   Expanding on Dr. Prewitt’s ideas, Host Larry Jacobs interviewed me last week (March 25, 2024) as we discussed:

“How Cognitive Biases Affect Student Perceptions and Educator Decisions”

   Below is our 33-minute discussion:

 

_ _ _ _ _ _ _ _ _ _

Summary

   This Blog discussed how cognitive biases affect educators’ positive and negative perceptions of students, staff, and systems, and how they similarly impact judgments at the individual, grade, and school levels. We especially focused on how to understand, prevent, or address the implicit or unconscious biases or explicit and conscious biases that interfere with or undermine productive interactions and objective decision-making.

   Initially, we defined “Cognitive Bias” and discussed its overlap with stereotypes, prejudices, and discrimination. We then described and provided examples for eight of the most common cognitive biases in education (out of the more than 180 identified by behavioral scientists), discussing how they functionally impact teachers, related services professionals, administrators, and students.

   The eight cognitive biases were organized along a continuum: those that impact decisions before we make them, as we make them, and after we make them.

   Cognitive Biases Before We Make a Decision:

·       The Halo Effect

·       The Anchoring Bias

   Cognitive Biases As We Make a Decision:

·       The Confirmation Bias

·       The False Consensus Effect

·       The Optimism Bias

      Cognitive Biases After We Make a Decision:

·       The Misinformation Effect

·       The Hindsight Bias

·       The Self-Serving Bias

   The Blog concluded by making available a recent 33-minute Education Talk Radio interview—with Dr. Howie Knoff and Host Larry Jacobs—that provides additional context and information in this important area.

_ _ _ _

A New Funding Opportunity

   When districts or schools are interested in implementing my work—especially when funding is dwindling or short—I often partner with them and help them write (often, five-year) federal grants from the U.S. Department of Education.

   To this end:

   A new $4 million grant program is coming up in a few months that focuses on moderate to large school districts with at least 25 elementary schools.

   As we can submit multiple grants from different districts, if you are interested in discussing this grant and a partnership with me, call (813-495-3318) or drop me an e-mail as soon as possible (howieknoff1@projectachieve.info).

   A separate five-year $4 million grant program will likely be announced a year from now. This program is open to districts of all sizes.

   If you are interested, once again, it is not too early to talk.

   BOTH grant programs focus on (a) school safety, climate, and discipline; (b) classroom relationships, behavior management, and engagement; and (c) teaching students interpersonal, conflict prevention and resolution, social problem-solving, and emotional awareness, control, communication, and coping skills and interactions.

   If we partner, I will write the bulk of the Grant proposal (at no cost), and guide you through its submission.

   Beyond these grants, if you are interested in my work for your school or educational setting, I am happy to provide a free consultation for you and your team to discuss needs, current status, goals, and possible approaches.

   Call me or drop me an e-mail, and let’s get started.

Best,

Howie

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

Saturday, March 16, 2024

Helping Schools Pick and Implement the Best Evidence-Based Programs (Part II)

Avoiding Mistakes, Best Practices, and Pilot Projects

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

Dear Colleagues,

Introduction: Going Backwards to Move Forward

   Districts and schools are always in the process of “buying stuff.”

   They are constantly acquiring curricula, assessments, interventions, technology, professional development, and consultants.

   These acquisitions should not be chosen randomly or based on testimonials or marketing promises describing “research” that is methodologically unsound and that does not demonstrate objective and consistently meaningful student, staff, and school outcomes.

_ _ _ _ _

   In Part I of this two-part series, we encouraged districts and schools to make objective, data-driven decisions in these areas, recommending specific definitions and standards. We used the commercials at the Super Bowl as a metaphorical guide.

February 24, 2024

What Super Bowl Commercials Teach Education About Media and Product Literacy: The Language and Process that Helps Schools Vet New Products and Interventions (Part I)

[CLICK HERE to Read and Review]

_ _ _ _ _

   That Blog emphasized and outlined—applying the goals and questions within a sound middle school Media Literacy program—why educators need to be both Media and Product Literate when reviewing and evaluating marketing materials or on-line reviews of curricula, instructional or intervention products, assessment or evaluation tools, professional development programs, or direct and indirect consultation services for purchase.

   We described in detail three common terms used to “validate” these products: “Scientifically-based,” “Evidence-based,” and “Research-based.”

   Here, we asserted the importance of educators understanding (a) these terms’ respective definitions, histories, and differences; and (b) the questions and objective criteria needed to determine that a product can validly provide the student, staff, or school outcomes that it asserts.

   We understand that Media and Product Literacy—and their accompanying reviews—take time.

   But especially for purchases that will be used or implemented for five or more years (e.g., a new reading, math, or science curriculum or on-line program; a new district Student Information or Data Management System), the review time avoids costly mistakes, and is essential to long-term student, staff, and school success.

   At the end of the Blog Part I, we referenced a recent January 19, 2024 Education Week article that discussed the “Five Mistakes for Educators to Avoid When Picking ‘Evidence-Based’ Programs.”

   In this Blog Part II, we explore this article and its implications to further assist districts and schools before they acquire “new stuff.”

_ _ _ _ _ _ _ _ _ _

Providing Context to Move Forward

   As a national consultant, I deal frequently with the selection and implementation of evidence-based programs and practices as I help districts and schools across the country implement effective academic and social, emotional, and behavioral strategies for their students and staff.

   Two on-line sources of evidence-based programs are the What Works Clearinghouse for education, and the Evidence-Based Practices Resource Center for mental health.

   But even though there are other sound ways to conduct research that validates different strategies and interventions, both Centers almost exclusively use their “Gold Standard” approach when designating programs or practices as evidence-based.

   This approach typically emphasizes the use of Randomized Control Trials (RCT) that demonstrate that a specific program or practice is causally (not correlationally) responsible for targeted student outcomes.

   In an RCT study, students are randomly assigned to either a Treatment Group (that receives the to-be-evaluated program or practice) or a Control or Comparison Group (that either does not receive the program or practice, or receives an innocuous “placebo” approach that is irrelevant to the targeted outcomes).
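   As a minimal illustration of what random assignment means in practice, here is a short Python sketch (the roster and the function name are hypothetical, for illustration only, and not drawn from any cited study):

```python
import random

def randomly_assign(students: list[str], seed: int = 2024) -> tuple[list[str], list[str]]:
    """Split a roster into Treatment and Control groups purely by chance,
    so that pre-existing differences are spread evenly across both groups."""
    rng = random.Random(seed)        # fixed seed makes the split reproducible and auditable
    roster = list(students)          # copy, so the original roster is untouched
    rng.shuffle(roster)
    midpoint = len(roster) // 2
    return roster[:midpoint], roster[midpoint:]   # (Treatment, Control)

treatment, control = randomly_assign(["Ava", "Ben", "Cai", "Dee", "Eli", "Fay"])
print("Treatment Group:", treatment)
print("Control Group:  ", control)
```

   Because chance alone determines group membership, any later difference in outcomes can be attributed to the program or practice itself, rather than to who happened to be in which group.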

   My point here is not to get into a heavy discussion of educational research.

   My point is that—if the above description already has your head spinning, and you are responsible for selecting a strategy or intervention for your classroom, grade-level, department, or school—you may avoid the technical research and then choose the wrong intervention.

   Hence, the “five mistakes” from the Education Week article.

_ _ _ _ _ _ _ _ _ _

Mistakes to Avoid When Choosing Evidence-Based Programs

   The five mistakes that educators need to be mindful of when evaluating and choosing an evidence-based program, curriculum, or intervention are:

·       Equating Research Quality with Program Quality

·       Looking only at the summary (or rating)

·       Focusing too much on effect size

·       Forgetting whom the program serves

·       Taking ‘no effect’ for a conclusive answer

   To summarize:

   Even when a program, curriculum, or intervention meets the “gold standard” of research, this “designation” may say more about the quality of the research than the quality of the approach.

   This is because the research often does not tease out exactly why the approach was successful—especially when the program, curriculum, or intervention is complex and multi-faceted.

   Indeed, there may be elements of a program that are unsuccessful, but they may be masked by the statistically positive effects of another element that compensates for these faulty elements as the results are pooled.

   Given this, educators must look past the ways that, for example, the What Works Clearinghouse organizes the recommendations in its summaries:

·       For Individual Studies and Intervention Reports: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), Uncertain Effects, and Negative Effects; and 

·       For Practice Guides: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), or Evidence that Demonstrates a Rationale for a Recommendation (Tier 4). . .

and really read the study(ies) reviewed in a research report, or the methods described in a published research article.

_ _ _ _ _

   Educators must also understand what an effect size represents.

   One of the most common effect size calculations is Cohen’s d. Cohen suggests that d = 0.2 is a “Small” effect size, d = 0.5 is a “Medium” effect size, and d = 0.8 or greater is a “Large” effect size.

   But what does this mean?

   Statistically, a Small (0.2) effect size means that 58% of the Control Group in a study—on the scores or ratings used to evaluate the program, curriculum, or intervention—fell below the average of the Targeted, Participating, Treatment, Intervention, or Experimental Group.

   A Medium (0.5) effect size means that 69% of the Control Group fell below the Treatment Group’s average on those same scores or ratings.

   A Large (0.8) effect size means that 79% of the Control Group fell below the Treatment Group’s average.

   Thus, even with a Large effect size, 21% (i.e., one out of every five students) of a Control Group—that did not participate in, for example, a new reading or social-emotional learning program—scored as well as or better than the average student who actually participated in the program.

   Critically, even with a 1.4 effect size, 8% of a Control Group scored as well as or better than the average student who received the new program.
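   To verify these percentages yourself: under the standard assumption that both groups’ scores are normally distributed with equal variances, the “percent of the Control Group falling below the Treatment Group’s average” is simply the normal cumulative probability of d (a statistic sometimes called Cohen’s U3). Here is a minimal Python sketch:

```python
from statistics import NormalDist

def percent_of_control_below_treatment_mean(d: float) -> float:
    """Cohen's U3: the fraction of Control Group scores expected to fall
    below the Treatment Group's average, assuming normally distributed
    scores with equal variances in both groups."""
    return NormalDist().cdf(d)

for d in (0.2, 0.5, 0.8, 1.4):
    below = percent_of_control_below_treatment_mean(d)
    print(f"d = {d:.1f}: {below:.0%} of Control below the Treatment average; "
          f"{1 - below:.0%} at or above it")
```

   Running this reproduces the figures above: 58%, 69%, and 79% of the Control Group fall below the Treatment average for d = 0.2, 0.5, and 0.8, respectively, and 8% of the Control Group remains at or above it even when d = 1.4.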

_ _ _ _ _

   Moving on: Even when the research for a program, curriculum, or intervention is positive, educators still need to ask the following essential questions:

·       “Was this program, curriculum, or intervention validated for students, staff, and schools like mine?”; 

·       “Do I have the time, resources, and support to implement this approach in my setting?”; and 

·       “Does my setting’s circumstances (e.g., the need for immediate change because of a crisis situation) match those present in the approach’s validating research?”

_ _ _ _ _

   Finally, when a program, curriculum, or intervention was not validated, educators still need to read the research.

   As alluded to above, sometimes there are other research approaches—ones not preferred or accepted by the What Works Clearinghouse or the Evidence-Based Practices Resource Center—that might still validate the approach.

_ _ _ _ _

   The “bottom line” in all of this is that educators must be committed to (a) making objective and data-driven decisions relative to the new programs, curricula, or interventions that they need; (b) understanding the methodological and statistical elements that go into the research that has evaluated the approaches they are considering; (c) ensuring that the approaches are well-matched to their students, staff, and/or schools; and (d) making sure that they have the time and resources needed to implement the finally-selected approach with integrity and at its needed intensity.

_ _ _ _ _ _ _ _ _ _

Post-Script: Avoiding “Best Practices” and “Pilot Projects”

   In a February 12, 2024 article in Fast Company, Keyanna Schmiedl explained “Why it’s time to stop saying ‘best practices’ in the business world.”

[CLICK HERE to Link to this Article]

   Discussing her preference for the term “Promising Practices” over “Best Practices,” she stated:

Language is crucial to leadership.

 

A single word or phrase can change the tone of an entire statement, and thus, the message employees take away from it. Those takeaways then develop into attitudes, which influence company culture and productivity.

 

Therein lies the issue with the term best practices. “Best” doesn’t leave room for flexibility and conversation. “Best” implies there’s only one solution or set of solutions to a problem, and that those solutions should remain unchallenged. And when you aren’t ready to challenge the status quo, you aren’t going to make any progress.

 

According to Salesforce, 86% of employees and executives believe a lack of collaboration or ineffective communication is the cause of workplace failures.

 

By adopting an ethos of promising practices—encouraging leaders to build with their employees, rather than simply instructing them on what they think is best—leaders can create the culture of collaboration and accountability needed to foster success.

 

(P)romising practices empower companies to lead with a mindset of humility and growth. Leaders can say, “This practice is hopeful. It brought good results for us, and we think it can bring good results for you, too.” Then, other organizations can take that baseline method and make it work for them.

 

Taking a holistic approach and incorporating the employee voice is what leads to more effective problem-solving, and therefore, the development of promising practices that work better for everyone.

_ _ _ _ _

   Schmiedl’s comments apply directly to districts, schools, and educational leaders.

   However, I recommend two important semantic changes that offer additional reasons to retire the term “Best Practices” in education.

   The first semantic change is to change Schmiedl’s “baseline method” term to “evidence-based blueprints.”

   In a science-to-practice context—and incorporating this Blog Series’ earlier discussions—I consistently describe the interdependent components that guide successful school change or improvement as existing within “evidence-based blueprints.” These blueprints cover, for example, strategic planning, differentiated instruction, the continuum of academic or social-emotional interventions, and multi-tiered systems of support.

   They are “evidence-based” because all of my work either is formally designated as evidence-based (through the U.S. Substance Abuse and Mental Health Services Administration—SAMHSA) or uses research-to-practice components that are field-proven. That is, across large numbers of schools in diverse settings across the country, objective evaluations have demonstrated our consistent and meaningful student, staff, and school outcomes.

   They are “blueprints” because, as above, they identify the essential interdependent components needed for successful implementation, but give schools the flexibility (a) to include complementary strategies that add depth and breadth; (b) to sequence their activities in strategic and student-need-driven ways; and (c) to align their professional development, coaching, and evaluation approaches to maximize existing resources and staff capabilities.

   The second semantic change—which still supports Schmiedl’s recommendation that we retire the term “Best Practices”—is to replace it with the term “Effective Practices.”

   The two related reasons are:

·       Many educators hear the term “Best Practices,” think that the recommended practices will make them work “over and above” what really is necessary, and ask, “Why can’t we just do what is needed to make us successful? Why do we have to go ‘above and beyond’?” 

Quite simply: When educators hear “Effective Practices,” they are more comfortable that the recommended practices address the questions above.

_ _ _ _ _

·       Many administrators and school board members hear the term “Best Practices,” think that the recommended practices will be overly expensive, and ask, “Why are you selling us a Lexus, when all we need is a Toyota?” 

Once again, when they hear “Effective Practices,” they are comfortable that the costs will result in the expected outcomes, and that a lesser amount might undercut these outcomes.

_ _ _ _ _

   Finally, as long as we are retiring the term “Best Practices,” let’s also reconsider the use of Pilot Projects.

   In my experience, districts and schools most often implement Pilot Projects when a program or approach:

·       Is being pushed by small groups of educators, and their administrators really are not terribly interested, but they nonetheless do not want to completely discourage the group or tell them “no” straight out; 

·       Has questionable research or is unproven with the projected group(s) of students, staff, or schools; or 

·       Is proposed in a district or school that doesn’t have (and may never have) the money or resources to go “all-in” on the program or approach.

   But Pilot Projects are also often recommended when well-validated programs, curricula, or interventions—that would have long-term positive impacts on students, staff, and schools—are suggested, and the administrators in question really don’t like the approach (or, sometimes, the individuals making the proposal).

   Here, the administrators want to appear “open to new ideas,” but they really are hoping that the pilot will fail or the individuals will become discouraged.

   Even when implemented and successful, pilot projects rarely are scaled up. This is because: 

·       Those (usually, school staff) who do not want a successful pilot project to expand to their school, department, or grade level, find ways to question, minimize, reject, or cast doubt on its ability to be scaled-up or to work “in our school with our staff and our students;” and

·       Those (usually, district administrators) who do not want the successful pilot project to expand, cite the scale-up’s resources and costs, and its “competition” with other district priorities as reasons to not take the next steps.

   As an outside consultant, given the circumstances above and—especially—the low potential for eventual system-wide scale-up, I almost never agree to work in a district on a “pilot project.”

   If you are district-employed staff, know that your involvement in a pilot project may result in angry, jealous, or slighted colleagues. . . especially when they perceive you as receiving “special” attention, releases, resources, or privileges.

   On a semantic level, I understand that some programs, curricula, or interventions need to be “Field-Tested”. . . so let’s use this term. The term “Pilot Project” simply carries too much baggage. . . and this baggage, once again, predicts that the approach will never be fully implemented to benefit the students, staff, and schools that it might.

_ _ _ _ _ _ _ _ _ _

Summary

   Building on Part I of this two-part Series, this Blog Part II first discussed the evaluative approaches used by the What Works Clearinghouse for education and the Evidence-Based Practices Resource Center for mental health to rate specific programs, curricula, and interventions for implementation in districts, schools, and other educational settings.

   We then summarized the five “mistakes” that educators should avoid when choosing evidence-based programs. These mistakes are:

·       Equating Research Quality with Program Quality

·       Looking only at the summary (or rating)

·       Focusing too much on effect size

·       Forgetting whom the program serves

·       Taking ‘no effect’ for a conclusive answer

   Finally, we expanded the discussion, addressing why education should change the term “Best Practices” to “Effective Practices,” and why educators should be wary when administrators give permission for “Pilot Projects” in lieu of the full, system-wide implementation of well-validated programs, curricula, or interventions.

_ _ _ _ _

A Funding Opportunity: Speaking of Evidence-based Programs

   When districts or schools are interested in implementing my work—especially when funding is dwindling or short—I often partner with them and help them write (often, five-year) federal grants from the U.S. Department of Education.

   To this end:

   A new $4 million grant program is coming up in a few months that focuses on moderate to large school districts with at least 25 elementary schools.

   As we can submit multiple grants from different districts, if you are interested in discussing this grant and a partnership with me, call (813-495-3318) or drop me an e-mail as soon as possible (howieknoff1@projectachieve.info).

   A separate five-year $4 million grant program will likely be announced a year from now. This program is open to districts of all sizes.

   If you are interested, once again, it is not too early to talk.

   BOTH grant programs focus on (a) school safety, climate, and discipline; (b) classroom relationships, behavior management, and engagement; and (c) teaching students interpersonal, conflict prevention and resolution, social problem-solving, and emotional awareness, control, communication, and coping skills and interactions.

   If we partner, I will write the bulk of the Grant proposal (at no cost), and guide you through its submission.

   Beyond these grants, if you are interested in my work for your school or educational setting, I am happy to provide a free consultation for you and your team to discuss needs, current status, goals, and possible approaches.

   Call me or drop me an e-mail, and let’s get started.

Best,

Howie

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]