Saturday, September 21, 2024

Research Teases Out the Impact of Adverse Childhood Experiences

But Many Educators Still Don’t Understand Social-Emotional Screeners, and the Limitations of ACEs-Only Assessments

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

 

Dear Colleagues,

Introduction

   In my July 27, 2024 Blog, I discussed the results of a recent School Pulse Panel survey organized by the National Center for Education Statistics (NCES). Completed between May 14 and May 28, 2024, the survey asked 1,714 public school K-12 leaders from every state and Washington, D.C. about the most compelling social, emotional, behavioral, and mental health concerns in their schools.

   The article reported that the survey’s respondents identified a significant and wide variety of challenges in these areas. . . results echoed over the past year by other research and national reports.

   For example:

·       83% reported that the pandemic and its lingering effects continue to negatively influence the social-emotional development of students;

·       76% of the public school leaders said they need “more support for student and/or staff mental health”;

·       75% reported that students’ lack of focus or inattention had either a “moderate” or “severe” negative impact on learning during the 2023-24 school year;

·       71% need “more training on supporting students’ socioemotional development”;

·       45% reported having confiscated a weapon from students during the year;

·       36% reported that student acts of disrespect toward teachers or staff members, other than verbal abuse, occurred at least once a week;

·       30% reported instances of cyberbullying that happened at and outside of school at least once a week; and

·       20% reported that threats of physical attacks or fights between students occurred at least once a week.

   From a screening perspective, some schools use specific assessments or tools to identify students with possible social, emotional, or behavioral challenges that need follow-up.

   It is critical to note—right from the beginning—that educators must:

·       Read the technical manuals for social-emotional screening tools they are considering to determine if they are well-normed, reliable and valid, and applicable to the students in their specific schools; 

·       Recognize that screening tools are not diagnostic tools that are sensitive enough to differentially determine the specific social-emotional concerns or interventions needed by a student; 

·       Reflect that screening tools produce False-Positive and False-Negative results—where students are incorrectly identified as having a “problem” (that does not exist), or as not having a “problem” (where one actually exists), respectively; and 

·       Remember that screening tools must be followed up by diagnostic assessments that both validate the screening tool’s results, and determine the specific clinical concerns presented by a student.
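   To make the False-Positive/False-Negative point concrete, consider a short, hypothetical calculation. The base rate, sensitivity, and specificity below are invented for illustration only; they do not describe any published screener:

```python
# Hypothetical illustration of screening error rates.
# All numbers are invented for demonstration; no real screener is modeled.

def screening_outcomes(n_students, base_rate, sensitivity, specificity):
    """Return the expected flagged count and error counts for a screener."""
    with_concern = n_students * base_rate
    without_concern = n_students - with_concern

    true_positives = with_concern * sensitivity          # correctly flagged
    false_negatives = with_concern - true_positives      # missed: a "problem" exists
    true_negatives = without_concern * specificity       # correctly passed
    false_positives = without_concern - true_negatives   # wrongly flagged

    return {
        "flagged": true_positives + false_positives,
        "false_positives": false_positives,
        "false_negatives": false_negatives,
    }

# 500 students, 10% with a genuine concern, and a screener that is
# 85% sensitive and 80% specific -- plausible-sounding but invented.
result = screening_outcomes(500, 0.10, 0.85, 0.80)
print(result)  # about 132 students flagged, of whom 90 are false positives
```

   Even with respectable-sounding accuracy, most of the flagged students in this sketch are false positives. This is precisely why screening results must always be validated by individual diagnostic assessment.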

_ _ _ _ _

The ACEs  

   One screening tool area that has received a fair amount of attention in recent years measures students’ Adverse Childhood Experiences (ACEs).

   There are serious concerns with these scales because some believe that: (a) the ACEs items reliably, validly, and diagnostically assess “traumatic” events in students’ lives; (b) a high number of ACEs events or a high score predicts individuals who will exhibit current or eventual social, emotional, behavioral, and mental health challenges; and (for the very uninformed) that (c) this relationship is causal rather than merely correlational.

   Through this Blog, we want to make sure that every educator understands what an ACEs scale is really measuring, and we report on a recent study that begins the process of helping us understand what an ACEs screening survey score may actually correlate with in our classrooms and schools.

   As we try to “cut through the talk,” we hope to solidify educators’ understanding that an ACEs screening has more limitations than strengths relative to its applicability to classrooms and schools.

_ _ _ _ _ _ _ _ _ _

What the ACEs Research Is and Isn’t

   The original ACEs Study was conducted by the Kaiser Permanente Health Maintenance Organization (HMO) in Southern California from 1995 to 1997 with two waves of data collection. As they were receiving physical exams, over 17,000 HMO members completed confidential surveys regarding their childhood experiences and their current health status and behaviors. Significantly, beyond the fact that the sample was from a limited geographic area, the participants were primarily white and from the middle class.

   Below are the actual ACEs Study Questions. Each “Yes” response received one point toward the “final score.” As educators, please read these items relative to today’s students. Think about how many of your students have experienced four or more of these events so far in their lives (more on that below).

While you were growing up, during your first 18 years of life:

 

1. Emotional Abuse. Did a parent or other adult in the household often or very often… Swear at you, insult you, put you down, or humiliate you?

or Act in a way that made you afraid that you might be physically hurt?

 

2. Physical Abuse. Did a parent or other adult in the household often or very often… Push, grab, slap, or throw something at you?

or Ever hit you so hard that you had marks or were injured?

 

3. Sexual Abuse. Did an adult or person at least 5 years older than you ever…

Touch or fondle you or have you touch their body in a sexual way?

or Attempt or actually have oral, anal, or vaginal intercourse with you?

 

4. Emotional Neglect. Did you often or very often feel that … No one in your family loved you or thought you were important or special?

or Your family didn’t look out for each other, feel close to each other, or support each other?

 

5. Physical Neglect. Did you often or very often feel that … You didn’t have enough to eat, had to wear dirty clothes, and had no one to protect you?

or Your parents were too drunk or high to take care of you or take you to the doctor if you needed it?

 

6. Parental Separation or Divorce. Were your parents ever separated or divorced?

 

7. Mother Treated Violently. Was your mother or stepmother: Often or very often pushed, grabbed, slapped, or had something thrown at her?

or Sometimes, often, or very often kicked, bitten, hit with a fist, or hit with something hard?

or Ever repeatedly hit over at least a few minutes or threatened with a gun or knife?

 

8. Household Substance Abuse. Did you live with anyone who was a problem drinker or alcoholic or who used street drugs?

 

9. Household Mental Illness. Was a household member depressed or mentally ill, or did a household member attempt suicide?

 

10. Incarcerated Household Member. Did a household member go to prison?
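   As a sketch of just how crude the resulting score is, the tally can be reproduced in a few lines of code. The category names mirror the ten items above, but the function and sample responses are hypothetical illustrations, not a clinical tool:

```python
# Minimal sketch of ACEs scoring: one point per "Yes," and nothing else.
# The function and the sample responses are illustrative only.

ACE_CATEGORIES = [
    "emotional_abuse", "physical_abuse", "sexual_abuse",
    "emotional_neglect", "physical_neglect",
    "parental_separation_or_divorce", "mother_treated_violently",
    "household_substance_abuse", "household_mental_illness",
    "incarcerated_household_member",
]

def ace_score(responses):
    """Sum the 'Yes' answers; duration, age, intensity, and impact are ignored."""
    return sum(1 for category in ACE_CATEGORIES if responses.get(category))

# A single parental separation and chronic, severe physical abuse
# each add exactly one point -- the score cannot tell them apart.
responses = {"parental_separation_or_divorce": True, "physical_abuse": True}
print(ace_score(responses))  # 2
```

   Note what the tally discards: a one-time event and a years-long pattern each contribute the same single point, which previews the contextual concerns discussed below.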

_ _ _ _ _

   Initially, it is paramount to remind educators that “Trauma” is clinically defined—from a psychological/psychiatric perspective—and it is diagnostically differentiated from related mental health challenges like stress, anxiety, and fear.

   We have discussed this in a previous Blog, showing that the clinical definition of and criteria for a diagnosis of “trauma” are far narrower than the way trauma is represented in the popular press.

   Indeed, given the criteria, there are far fewer children, adolescents, and adults who are clinically traumatized than reported in the popular press, and when contrasted with individuals clinically affected by stress, anxiety, or fear.

   To read more about this, go to:

August 8, 2020 

Why Stress-Informed Schools Must Precede Trauma-Informed Schools: When We Address Student Stress First, We Begin to Impact Trauma. . . If It Exists

[CLICK HERE for this BLOG]

_ _ _ _ _

   Continuing the ACEs discussion begun above:

  The most critical concerns with the ACEs Questions are:

·       They do not discriminate between “finite” events (e.g., having a household member incarcerated) and events that can occur over time or in a repeated way; 

·       Thus, they do not quantify many of the events (e.g., how long was the separation, how many times was your mother physically threatened); 

·       They do not identify the age (or age range) when the child or adolescent experienced each event;

·       They do not ask for a rating of the intensity of each event (e.g., along a Mild-Moderate-Severe continuum);

·       They do not get a rating of the emotional impact of each event at the time that it occurred (e.g., along a None-Low-Mild-Moderate-Significant-Life Changing continuum); and

·       They do not get a rating of the current (assuming an event occurred in the past) and/or continuing emotional impact of each event.

   This leads to a critical conclusion:

Given the absence of this critical contextual information, educators (and others) do not really know the cumulative depth, breadth, intensity, or impact of an individual’s projected traumatic history from an ACEs screening.

 

Indeed, the screening may simply tell us how many challenging events an individual may have experienced. It does not tell us if one or more of the events were traumatic for an individual, or if they continue to be traumatic.

_ _ _ _ _

   Briefly, the results of the original ACEs study indicated that:

·       About two-thirds of participants reported at least one adverse childhood experience;

·       The number of ACEs points was strongly associated with high-risk health behaviors during adulthood such as smoking, alcohol and drug abuse, promiscuity, and severe obesity;

·       The number of ACEs points also correlated with depression, heart disease, cancer, chronic lung disease, and a shortened lifespan;

·       Compared to an ACEs score of zero, having four or more adverse childhood experiences (i.e., four or more ACEs points) was associated with a seven-fold (700%) increase in alcoholism, a doubling of the risk of being diagnosed with cancer, and a four-fold increase in emphysema; and

·       An ACEs score above six was associated with a 30-fold (3,000%) increase in attempted suicide.
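   The “seven-fold” and “30-fold” figures above are measures of association, not causation. As a hedged sketch (the counts below are invented and are not the study’s data), this is roughly how such a ratio is computed from a 2x2 exposure/outcome table:

```python
# Hypothetical 2x2 table: exposure (e.g., 4+ ACEs) vs. an outcome.
# Counts are invented to illustrate the arithmetic, not the study's results.

def odds_ratio(exposed_with, exposed_without, unexposed_with, unexposed_without):
    """Odds of the outcome among the exposed divided by odds among the unexposed."""
    odds_exposed = exposed_with / exposed_without
    odds_unexposed = unexposed_with / unexposed_without
    return odds_exposed / odds_unexposed

# Suppose 70 of 1,000 exposed and 10 of 1,000 unexposed report the outcome.
or_value = odds_ratio(70, 930, 10, 990)
print(round(or_value, 1))  # roughly a seven-fold difference in odds
```

   An odds ratio of this kind only quantifies how strongly two variables co-occur; it says nothing about whether the exposure caused the outcome, which is the central caution of this Blog.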

   More than 50 ACEs-related studies have followed the original. These studies have (a) used more diverse and different participant samples—including children and adolescents as respondents; (b) looked at different physical, behavioral, mental health, and life outcomes; (c) adapted the original ACE survey and methodology; and (d) replicated many of the correlational (not causal) results from the original study.

   In addition, the concerns highlighted by these studies resulted—starting in 2011 in Florida—in communities beginning trauma-awareness programs; and—about 10 years ago in Massachusetts, Washington, and California—in schools beginning similar trauma-related initiatives.

   Relative to prevention, a 2016 Centers for Disease Control and Prevention monograph offered a number of recommendations in this area.


_ _ _ _ _

   Critically, as noted, the ACEs surveys or scales reviewed are screening tools that are not very sensitive and may, in fact, be biased in their attempts to correlate a number of challenging events in students’ lives with their current social, emotional, behavioral, or mental health status.

   In point of fact, an ACEs scale:

·       Is not a reliable or valid diagnostic instrument;

·       Cannot support causal connections between any number of challenging life events and a student’s current social, emotional, or behavioral status; and

·       Has, if used as a school’s only social-emotional screener, a high potential of over-identifying some students while under-identifying others.

   If a district or school wants to use a social, emotional, behavioral screening tool (which is not necessarily recommended in all cases), there are a number of far more effective, psychometrically-sound tools available. . . that provide information and context well beyond the current ACEs screeners.

_ _ _ _ _ _ _ _ _ _

Kindergarten Readiness Not Impacted by a High Number of ACEs

   Earlier this month (September 11, 2024), an article was published in K12 Dive reviewing a study published in the Journal of Child and Family Studies involving 115 preschoolers attending a comprehensive school readiness summer program in Miami in 2017, 2018, and 2019. The children were transitioning between preschool and kindergarten, and they were enrolled because they were exhibiting disruptive behavior problems at home and in school.

   After completing an ACEs survey with the children’s parents or caregivers, the study reported that nearly all of the children had experienced poverty, about 94% had experienced at least one ACE, and 49% had experienced four or more ACEs. Only 6% of the children in the study had experienced no ACEs.

   The K12 Dive article summarized the study’s results and implications as follows:

·       The study found a correlation between the number of adverse childhood experiences faced by rising kindergartners and the severity of their disruptive behaviors, anxiety, and depression.

 

·       But the correlation between the ACEs and the students’ performance did not hold for the students’ academic and social readiness; those skills were comparable to those of peers who had experienced fewer harmful events.

   In an interview, the study’s lead author noted that the ACEs’ correlation with students’ internalizing and disruptive behavior was expected, but said she was surprised that there was no association between a student’s ACEs score and his or her academic functioning (for example, in early math and reading skills).

_ _ _ _ _ _ _ _ _ _

ACEs Implications and Practical Next Steps

   This study provides educators both good news and bad news.

   The good news is that research is starting (continuing) to investigate the functional relationship between the number of ACEs experienced by a student and a variety of academic and social, emotional, behavioral, and mental health outcomes.

   The bad news (for some believing that ACEs screenings portend important school outcomes) is that—while this study focused only on a relatively small number of very young students in one geographic area—it, as above, (a) confirmed a logical and expected correlation between the ACEs and participating students’ emotional and behavioral status, but (b) rejected what many educators assumed would be a similar ACEs correlation with their academic status.

   As noted earlier, we cannot conclude from even the one confirmed correlation above that a high ACEs score means that a student is “traumatized,” or that any of the ACEs events were the reasons behind a student’s current social, emotional, behavioral, or mental health status.

   Indeed, to really understand the results of this study, every student considered “at-risk” by the ACEs screener would need to be diagnostically assessed to determine exactly what appropriate and challenging behaviors they were demonstrating and when during the school day, as well as why the challenging behaviors were occurring (i.e., the root causes) and how they were being triggered.

   Critically, students’ social, emotional, and behavioral challenges are triggered in many different ways— often well beyond any of the events described in an ACEs screener. Indeed, if you re-read the ACEs screening questions above, you will note that the vast majority of the items focus on home and family-related events.

   Below is a table with the original ten ACEs areas, and X’s in boxes indicating that many of the ACEs events could occur in Community or School settings, as well as with or due to Peers.

   The point here is that:  The ACEs events or issues are not limited to our students’ familial experiences. Moreover, social, emotional, behavioral, and mental health assessments need to include multiple student settings, sources, and events.

   Said a different way:

Trauma, stress, anxiety, and other social-emotional issues are not setting-specific. They are event-dependent. Events obviously can be experienced outside of the family or home, and they can be just as emotionally debilitating as those measured by an ACEs screening.


_ _ _ _ _

   But even beyond the ACEs events above, there are many other life experiences that trigger students’ social, emotional, and interpersonal challenges.

   These may include:

·       Academic Frustration

·       Test/Homework/Work Completion Anxiety

·       Peer (including Girlfriend/Boyfriend) Conflicts/Rejection

·       Teasing and Bullying—Direct, Indirect, Social, and Social Media

·       Gender Status or Discrimination

·       Racial or Multi-Cultural Status or Discrimination

·       Sexual Identification or Orientation Discrimination

·       Socio-economic Status or Discrimination

·       Circumstances Related to Poverty/Parental Income

·       Family Moves/Housing Mobility/Homelessness

·       Competition/Losing

·       Physical or Other Limitations or Disabilities

   On a situational level, these triggers can produce emotional reactions that are just as quick and intense as those that are family- or trauma-related, and these events need to be consciously factored into a social, emotional, or behavioral screening.

   The Take-Aways here include the following:

·       There are multiple circumstances or events that trigger students’ emotionality in school. Many of them are not specifically (or by definition) traumatic events and, thus, schools that are too focused on trauma may easily miss them.

·       Schools need to assess and identify the emotional triggers that are most prevalent across their students, and they should target these emotional triggers with their preventative services, supports, and interventions.

At the Tier 1 level, these triggers need to be integrated into the schools’ social skills curricula at the prevention and early response levels.

At the Tier 2 and 3 levels, these triggers need to frame the strategic or intensive interventions or therapies that related services personnel prepare to deliver.

·       Finally, schools and districts need to be prepared to deliver the full multi-tiered continuum of services, supports, strategies, and interventions for students with social, emotional, behavioral, and mental health challenges.  This includes the necessary training, resources, and personnel both in general, and as needed on a year-to-year basis.

   Some of the Tier 2 or 3 clinical interventions that may be needed at the deeper levels of the multi-tiered continuum include:

·       Progressive Muscle Relaxation Therapy and Stress Management

·       Emotional Self-Management (Self-awareness, Self-instruction, Self-monitoring, Self-evaluation, and Self-reinforcement) Training

·       Emotional/Anger Control and Management Therapy

·       Self-Talk and Attribution (Re)Training

·       Thought Stopping approaches

·       Systematic Desensitization

·       Trauma-Focused Cognitive Behavioral Therapy (TF-CBT)

·       Cognitive-Behavioral Intervention for Trauma in Schools (CBITS)

·       Structured Psychotherapy for Adolescents Responding to Chronic Stress (SPARCS)

·       Trauma Systems Therapy (TST)

   Ultimately, districts and schools need to ask themselves: 

Do your related service professionals have the skills to clinically deliver (as needed, and based on student-centered diagnostic assessments) some or all of the strategies or therapies above. . . and/or are they available from the mental health professionals who are practicing in your community?

_ _ _ _ _ _ _ _ _ _

Summary

   We started this Blog journey by revisiting the many social, emotional, behavioral, and mental health concerns documented this past school year by the National Center for Education Statistics. While many of these concerns have existed for years, the striking outcome is how much they have been elevated, especially by the pandemic.

   To analyze these concerns on a local level, many districts use a social-emotional screening process with their entire student populations. Critically, the Blog discussed the limitations of these screening instruments, emphasizing that they make errors, and that all screening results must be validated through individual student diagnostic assessments.

   The Blog then focused on screeners that assess for students’ Adverse Childhood Experiences (ACEs). We detailed the history, psychometric properties, and research with ACEs assessments, noting serious limitations with their validity and ability to causally explain students’ social-emotional difficulties. We concluded that ACEs screeners are not good assessments for root cause analyses or to ecologically measure traumatic life events.

   We reviewed a recently published study analyzing preschool to first grade students that found the ACEs correlated with social-emotional but not academic performance. We identified the many reasons (beyond traumatic events) that explain students’ social-emotional challenges, and discussed some of the Tier 2 and 3 interventions available to help.

   Our ultimate conclusion was that:

We cannot conclude that a high ACEs score means that a student is “traumatized,” or that any of the ACEs events were the reasons behind a student’s current social, emotional, behavioral, or mental health status.

 

ACEs surveys are screening tools. Given the absence of critical contextual information within the individual events assessed through the ACEs’ items, educators (and others) do not really know the cumulative depth, breadth, intensity, or impact of an individual’s projected traumatic history from an ACEs screening.

 

Indeed, the screening may simply tell us how many challenging events an individual may have experienced. It does not tell us if one or more of the events were traumatic for an individual, or if they continue to be traumatic. This information can only come from an individual diagnostic assessment process that identifies past and present social, emotional, behavioral, and mental health status; the ecologically-based root causes of any significant challenges; and what specific evidence-based services, supports, and/or interventions link to specific root causes.

   All of this is focused on helping schools to most effectively address the social, emotional, and behavioral needs of all students—with a focus on their corresponding self-management. When students have social, emotional, and behavioral self-management skills, and the peer, staff, and school support to facilitate them, issues related to—for example—stress, anxiety, fear, trauma and their emotional triggers become less evident. . . because they are handling, addressing, and coping with them.

   Our schools still have a ways to go. But teachers, support staff, and administrators need the (right) training, professional development, and support; and schools need the mental health and related services colleagues to match. Otherwise, the gaps will (continue to) undermine all of the best intentions, plans, and actions.

_ _ _ _ _

   Now that everyone across the country has begun their school year, we hope that this Blog is helpful and relevant to your planning and implementation.

   If you would like to discuss these issues (or others) with me as part of a free virtual consultation, please drop me an e-mail (howieknoff1@projectachieve.info) so we can set up a Zoom call to look at your needs and gaps . . . and how to close these gaps and attain the outcomes that you want for students and staff.

   Together, I know that we can make this the school year that you want and that every student deserves.

Best,

Howie

 

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

Saturday, September 7, 2024

How Fad or Flawed School Programs Increase Poor Teacher Morale and Resistance to Change (Part V)

When Education Keeps Adopting the Same Shaky Stuff, It Will Keep Getting Repeated Rocky Results

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

 

Dear Colleagues,

Introduction

   A few weeks ago, I was interviewed by two entrepreneurs who had developed an online program that they were selling to schools to teach students social, cooperative, and leadership skills. The program was game-based. . . meaning that students worked together on a series of online games that required them to virtually communicate, cooperate, and make consensual decisions.

   Over time, I learned that (a) they seemed to be well-meaning and enthusiastic—if not idealistic—professionals who—it appeared—were trying to help students, staff, and schools in the areas above; (b) they had some educational background—but were really businessmen who also admittedly were interested in making money; and (c) they had a psychologist—albeit not a school or developmental psychologist—on their payroll to “ensure” the pedagogical “integrity” of their “instructional process.”

   The “success” that they currently touted on their website was based on the fact that (a) virtually all of the students looked forward to their “game time” each week; (b) the students stayed “engaged” on the platform after each game began; and (c) the students (and some teachers) reported that the students were “more cooperative” and exhibited “more leadership behavior” after playing the games.

   Ever the scientist-practitioner, I suggested to my colleagues that these outcomes did not objectively and scientifically prove that the program taught the participating students anything because—to cite just four examples:

·       They did no assessments of the students’ pre-existing social, cooperative, or leadership skills. . . and some students may have already had these skills—even if they never demonstrated them at school; 

·       Any behavioral improvements might have been a placebo and/or sponsorship or social desirability bias effect because the schools chose and purchased the program, the students and teachers knew the expected outcomes, and everyone was getting “special attention” from the game developers; 

·       The “outcomes” were only self-reported outcomes. . . that is, there were no objective behavioral observations of the students’ leadership behavior in their classrooms. . . during academic tasks that required the same skills as during the game simulations; and

·       The skills being “taught” by the game were hopelessly confounded as the games were simultaneously “teaching” such diverse skills as listening, following directions, addressing disagreements and conflicts, making joint decisions, and dealing with frustration and disappointment.

   These are not criticisms. They are, instead, scientific and methodological critiques.

   And yet. . . generalizing from my brief interactions with these entrepreneurs. . . it is my experience that:

·       Their approach to evaluating and “validating” their product is strikingly similar to how many educators (and other entrepreneurs) assess their instruction, programs, interventions, and purchased “innovations”; 

·       Many schools do not evaluate their curricula, programs, interventions, student information systems, etc. in methodologically, empirically, and functionally-sound ways before spending money and time, training and resources, and staff commitment and faith on less-than-sound approaches; and

·       These faulty patterns repeat as schools often move from one unsuccessful “solution” to the next one. . . without understanding that they need to change how they are selecting their solutions, rather than which solutions they have selected.

_ _ _ _ _ _ _ _ _ _

Putting a “Spotlight” on Fad or Flawed Approaches

Every time you do an intervention with a struggling student that doesn’t work, you potentially make the student more resistant to the next intervention.

Every time you do a school-wide intervention with a struggling staff that doesn’t work, you potentially make the staff more resistant to the next school-wide intervention.

                                                                        Howie Knoff

   A recent (August 6, 2024) EdSurge article cogently analyzed the implications when educators fall prey to the “entrepreneurial glitter” that leads to the implementation of an educational fad in their schools.

   In past Blogs, we have extensively covered this phenomenon, exposing many past and still-present fads through objective data and sound, independent research.

   We have also “spotlighted” (metaphorically; see film clip below) the fact that some flawed approaches—like the “upper-case” PBIS, MTSS, SEL, and RtI frameworks. . . and the Restorative Justice, Trauma-Sensitive, and Mindfulness school programs—are promoted by the U.S. Department of Education (or its many taxpayer-funded National Technical Assistance Centers), private political organizations (like CASEL), or private entrepreneurial organizations (like the International Institute for Restorative Practices—IIRP).

   While these organizations make their approaches appear legitimate and sustainable, when they are analyzed in-depth, they are actually “Emperors without empirically-based clothes.” And many people know. . .


   The point here is that, as in the quote above, when these flawed frameworks or programs predictably fail or fade away, there are student and staff losses.

·       The First Loss is the loss of time, resources, and other investments in approaches that—once again, predictably—are not successful. . . and that take the place of other truly evidence-based approaches that have much higher probabilities of student success.

·       The Second Loss is the time and effort needed to “undo the damage” to the original problem or challenge. . . that is now more complex, confounded, or extreme than it was before the flawed framework or program was tried.

·       The Third Loss is to the students and staff who are now more resistant to change and/or the “next” approach because everyone trusted and invested their energy in the flawed framework or program. . . that (predictably) did not work. 

Critically, this Loss is the most insidious of all as it poisons students’ and staff’s willingness to try the “next” intervention. . . even when the next intervention may actually be the “right” intervention.

   So. . . there are losses here. Whether to the individual student in-need (and those working with her or him), or to the groups of students in-need (and an entire school community working with them). . . there are losses here.

_ _ _ _ _

   The EdSurge article asserts that education doesn’t seem to be learning from its past mistakes. Instead, education seems to move from fad to fad.

   EdSurge quoted James Stigler, a distinguished professor of psychology at the University of California, Los Angeles:

It's not that specific reform ideas are fads. It's that schools seem susceptible to fads because people don't understand what it means to take an idea seriously.

 

In reality, many ideas out there haven’t been properly tried out, because that would mean focusing largely on how they are put into practice in classrooms. There are probably a lot of ideas out there that are effective, but nobody knows what they are.

_ _ _ _ _

   The article then quoted Adrian Simpson, principal of St. Mary's College at Durham University in England and professor of mathematics education:

Evidence is the magic word. It’s also the source of part of the problem.

 

Those questing for evidence-based education approaches tend to rely on randomized controlled trials, a robust form of study widely used in medicine to establish causation. In education, that can mean field experiments that show a practice worked in a particular context or laboratory experiments in cognitive science.

 

What [these] tell you is very powerful, but very narrow. These studies are taken to show that certain approaches work. But they only really establish that the sum of all the differences in interventions caused learning for some participants. Which specific intervention worked, and whether it would work for other students, is hard to determine.

 

So it’s tricky to translate the lessons of these experiments into learning. Researchers also understand less about the mechanisms of how people think about, say, fractions than about how kidneys function. Thus, the evidence provided by experiments about specific practices in education is weaker than in areas like medicine, where physiology tends to be similar from person to person. You can’t establish laws of the classroom that will apply everywhere.

 

Ultimately, there’s no quick fix for the reform cycle. Teachers should bring together insights from a number of sources—from research about memory capacity to tips from the teacher next door—to inform how they unlock learning for their students. Rather than asking what they can do to make a student better with fractions, a teacher might ask: ‘What’s causing this child to handle fractions poorly?’ That could provide an insight that isn’t solely focused on teacher interventions which could, nonetheless, help the student learn.

_ _ _ _ _

   The necessary conclusion here is that educators need to be wary of inadvertently implementing flawed frameworks, programs, quick-fix solutions, and fads in their schools and classrooms.

   As advocated by Simpson above, educators—like medical doctors—need to utilize data-based problem-solving approaches that are anchored by evidence-based blueprints.

   These blueprints identify the components that contribute to the fullest understanding of a specific educational area.

   An analysis of these components is used to frame the root cause analysis needed to determine the underlying reasons for students’ challenges in these specific areas. And the results of this analysis are then linked to the services, supports, and interventions needed to address the existing challenges.

   It is during the Problem Identification stage that Simpson’s question above (“What’s causing this child to handle fractions poorly?”) is particularly important as it frames the root cause analyses that follow.

   When moving from problem analysis to intervention, Simpson’s advice above (“Teachers should bring together insights from a number of sources—from research about memory capacity to tips from the teacher next door—to inform how they unlock learning for their students”) is particularly pertinent.

   Here, he is emphasizing the importance of a science-to-practice perspective when interventions are being developed. This occurs as experienced teachers utilize the evidence-based blueprint relevant to the problem at hand to determine which research-based interventions have the highest probability of succeeding with an individual—and sometimes, unique—student.

   Significantly, we have discussed a number of these blueprints in this Blog Series:

July 13, 2024   Part I:

The Seven Sure Solutions for Continuous Student and School Success: “If You Don’t Know Where You’re Going, Any Road Will Get You There”

[CLICK HERE to LINK to this BLOG]

_ _ _ _ _

July 27, 2024   Part II:

Are Schools Really Prepared to Address Educators’ Biggest Behavioral Student Concerns Right Now? “We’ve Got Serious Problems and We Need Serious People”

[CLICK HERE to LINK to this BLOG]

_ _ _ _ _

August 10, 2024   Part III:

Will Your School “Win the Gold” for Your Students This Year? Why the U.S. Women’s Gold Medal Olympic Gymnastics Team is a Model for All Schools (Part III)

[CLICK HERE to LINK to this BLOG]

_ _ _ _ _

August 24, 2024   Part IV:

Students' Health, Mental Health, and Well-Being Worsens Over the Past 10 Years: Summary of the August 2024 Centers for Disease Control and Prevention Report

[CLICK HERE to LINK to this BLOG]

_ _ _ _ _

   We encourage you to (re)read one or more of these Blogs in the context of this Blog’s discussion.

   But we also want to make one more connection before completing this Blog Series.

_ _ _ _ _ _ _ _ _ _

Teacher Morale and Fad or Flawed Frameworks and Programs

   An August 12, 2024 Education Week article reported on a survey of 1,487 public school teachers and 131 private school teachers, conducted between January and March of this year.

   The article reported:

After an uptick in morale last year, teachers nationally are saying that their mental health has worsened and that they are less satisfied with their careers than they were a year ago.

 

Teacher job satisfaction appeared to reach an all-time low in 2022, with 12% of public school teachers saying they were very satisfied with their jobs as they grappled with the fallout from disruption caused by the COVID-19 pandemic. Though job satisfaction climbed last year, it slipped slightly this year to 18% of public school teachers saying they were very satisfied. This is still much lower than it was decades ago.

 

The results come as classrooms are facing increased scrutiny and politicization, students are grappling with mental health concerns and increasing behavioral challenges, and teachers’ pay remains low. Even as educators report more difficult work environments, though, they report having minimal or nonexistent programming to support their mental well-being.

 

Here are key takeaways from the report.

 

1. Teachers want better working conditions and higher pay

 

Past surveys have found that teachers are experiencing poor mental health. However, teachers often point to changes in working conditions—and compensation—as things that would improve their mental health, according to the report.

_ _ _ _ _

 

2. Teachers are increasingly concerned with student discipline

 

Student behavior has increasingly become a concern for educators since the pandemic. More public school teachers wanted additional support for dealing with student discipline issues. Elementary (74%) and middle school (71%) teachers, and teachers in suburban districts, are more likely to say that more support for dealing with discipline would help improve their mental health.

 

In the survey, teachers wrote in open responses that they felt they weren’t able to discipline students. Even when misbehaving students are sent out of class, they come back with snacks and don’t change their behavior.

_ _ _ _ _

 

3. Administrator support has an impact on well-being

 

Only a small number of educators said that mental health programming for teachers is extensive. A larger share—22% of public school teachers, and 24% of private school teachers—said their schools don’t offer any type of mental health programming.

 

Staff were more likely to say their principals provided a lot of concrete support this year compared to last year, but still, more than a quarter of the respondents said the administration did not provide any support at all.

 

Survey results also suggested that positive support from building leadership was not the norm. Only 11% of public school teachers said their principals provided “a lot” of support for their mental health, whereas 1 in 3 teachers said their principals offered no support at all. Multiple teachers reported in open responses that their principals had negative impacts on their well-being.

_ _ _ _ _

 

4. Student and teacher mental well-being is connected

 

Student mental health has become a more pressing concern following the pandemic’s disruptions, with schools feeling less equipped to support students’ needs. Teachers who say their own mental health is negatively affecting their work are more likely to say that students’ mental health can also have a detrimental impact on learning.

 

There are some positives: Compared to last year, a smaller percentage of teachers reported that students’ mental health concerns are negatively affecting learning and behavior. But a larger share of teachers also reported that students’ mental health declined over the course of the school year, rather than improving.

 

A majority of teachers recommended at least one student receive intervention or counseling services last school year, and roughly half of teachers said their schools needed more counselors, psychologists, and social workers.

 

In an open-ended question, a majority of teachers—83%—said they should be expected to support students’ mental health and refer students to professionals when needed, but the most common response was that teachers should not replace mental health professionals.

_ _ _ _ _

   Significantly, after reviewing the entire Report and its 32-question survey, we noted that:

·       Some of the survey questions had “forced-choice” answers where respondents were given:

(a) a specific metric (e.g., “Worse,” “The Same,” or “Better”; or “Negative,” “Positive,” or “Neutral”) to select for their answer; or

(b) lists of items to select from as their “answers” (e.g., “Choose from the list below those additional steps that your district or school could take to better support student mental well-being”; or “What steps could your district or school take to support your mental well-being? Select all that apply”).

_ _ _ _ _

·       Other questions were open-ended or free-response questions that were then coded to fit the responses from the previous year’s survey so they could be compared. 

This “forced-coding” approach may have sacrificed some of the nuances in the 2024 data relative to new issues that have emerged only this past year.

_ _ _ _ _

   Notably: The survey did not directly ask its teachers to specify, for example, the different reasons why their job satisfaction was so low, or why students’ mental health was “worse when compared to the beginning of the 2023-24 school year.”

   And yet, when reading the Report, some district or school staff might incorrectly connect certain highlighted questions as “the reasons” underlying teachers’ current dissatisfaction or mental health issues. . . and then begin initiatives to “address the problem.”

   Hence, this Report might inadvertently trigger a search that results in the implementation of fad or flawed programs. . . as discussed earlier in this Blog.

   For example, the Report provided data on “the percent of teachers saying that more/better support for student discipline-related issues is a step their districts/schools could take to improve their mental health.”

   And while this is an important question, it is likely that student discipline is not the only (or even the predominant) reason or area to target relative to interventions to improve teacher satisfaction or mental health.

   And yet, if a school were to draw this conclusion, and needlessly re-vamp its school-wide discipline approach with a “fad or flawed” program or framework, this might result in more discipline problems, decreased teacher mental health and wellness, and increased teacher dissatisfaction.

_ _ _ _ _

   As another example, the Report referenced “Social-Emotional Learning” in a number of questions. In fact, “Social-Emotional Learning” was the Number Two-rated “service your district or school currently offers to support the mental well-being of your students.”

   And yet, the original survey never defined or described what it meant by “Social-Emotional Learning.”

   But. . . if a school jumped on this reference in this Report and introduced a “fad or flawed” SEL program or framework, this might result in more student problems, decreased teacher mental health and wellness, and increased teacher dissatisfaction.

_ _ _ _ _

   The ultimate point here, my friends, is:

When districts and schools do not soundly and objectively analyze the root causes of their teacher morale and student behavior/mental health challenges. . . and they adopt fad or flawed programs or interventions. . . and these programs predictably do not work. . . the district and/or school may inadvertently contribute to both a worsening of the original problems and an increase in staff and student resistance to try any more “recommended” approaches.

_ _ _ _ _ _ _ _ _ _

Summary

The only Real Success in Education is when you can Explain Your Progress, and Extend and Sustain it. One or two years of progress is accomplishment, not success.

 

The only Real Failure in Education is when you cannot explain why you have not Progressed. . . so that you can Change it.

 

                                                                        Howie Knoff

   This is the fifth and last entry in my “Late Summer Blog Series” that highlighted some of the quotes in my June “spotlight” session at The Model Schools Conference in Orlando.

   I have used these quotes to address some current issues and events in education. . . with a focus on helping schools and educational settings to really think about what they want to do—on behalf of their students—during this new school year.

   While there is a lot to unpack in this Blog. . . we discussed the following:

·       Districts and schools need to objectively evaluate the services, supports, interventions, and programs—using sound and accepted methodological, research, and evaluation principles and practices—that they are investigating for student, staff, and school implementation. . . before adopting, purchasing, and beginning the training and implementation process.

Many districts and schools still do not complete (or effectively complete) this necessary due diligence process. . . resulting in too many adopted fads, unproven practices, or flawed frameworks.

_ _ _ _ _

·       Districts and schools need to closely read the original research and reports that are cited or described in the popular (education) press.

Some of these articles are written by authors who have not done, as immediately above, their methodological “due diligence”. . . some are written—given their limited editorial space—in shortened form as the authors assume that the reader will do their own “due diligence”. . . and some are written to highlight only certain elements of the original study (or those elements that the author favors) to make a specific (sometimes biased) point.

Regardless, if districts or schools draw the wrong conclusions or take improper actions based on a popular press article, they are nonetheless responsible.

_ _ _ _ _

·       Finally, and most importantly: districts and schools, obviously, do need to attend to both student and staff health, mental health, and wellness needs. But they need to, first, fully understand the root causes of these needs so that they can address them with the most evidence-based and proven, and highest-probability-of-success approaches.

_ _ _ _ _

   As in the first quote that prefaced this Blog, educators need to avoid implementing poorly-selected approaches that do not succeed, and that both exacerbate the original problems or challenges, and create student and staff resistance to the next approach. . . (which may actually be the “correct” approach).

   As in the quote prefacing this Summary Section, it is important to note that, methodologically, a sustained school-wide approach or initiative cannot be deemed “successful” until it has shown at least three years of objective, data-based progress or accomplishment. . . and until the data causally connect the approach or initiative to the progress or accomplishment.

   Relatedly, there are times when well-selected approaches still do not work.

   While these are certainly “set-backs,” they should not be considered “failures.”

   As above, the only real failure is when educators cannot explain why the set-back occurred.

   Often, once the set-back is understood, staff can make needed “mid-course” corrections—even using the original approach—that produce long-term, sustained success.

_ _ _ _ _

   Now that virtually everyone across the country has begun their school year, we hope that this Blog and the entire Blog Series has been helpful and relevant. We encourage district and school leaders to use one or more of the Blogs in this Series as advanced reading for a staff or committee discussion.

   If you would like me to join that discussion, please drop me an e-mail (howieknoff1@projectachieve.info) so we can set up a free Zoom call to look at your needs and how to close your gaps or attain the outcomes that you want for students and staff.

   Together, I know that we can make this the school year that you want and that every student needs.

Best,

Howie

 

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]