
Saturday, March 16, 2024

Helping Schools Pick and Implement the Best Evidence-Based Programs (Part II)

Avoiding Mistakes, Best Practices, and Pilot Projects

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

Dear Colleagues,

Introduction: Going Backwards to Move Forward

   Districts and schools are always in the process of “buying stuff.”

   They are constantly acquiring curricula, assessments, interventions, technology, professional development, and consultants.

   These acquisitions should not be chosen randomly or based on testimonials or marketing promises describing “research” that is methodologically unsound and that does not demonstrate objective and consistently meaningful student, staff, and school outcomes.

_ _ _ _ _

   In Part I of this two-part series, we encouraged districts and schools to make objective, data-driven decisions in these areas, recommending specific definitions and standards. We used the commercials at the Super Bowl as a metaphorical guide.

February 24, 2024

What Super Bowl Commercials Teach Education About Media and Product Literacy: The Language and Process that Helps Schools Vet New Products and Interventions (Part I)

[CLICK HERE to Read and Review]

_ _ _ _ _

   Applying the goals and questions of a sound middle school Media Literacy program, that Blog outlined why educators need to be both Media and Product Literate when reviewing and evaluating marketing materials or on-line reviews of curricula, instructional or intervention products, assessment or evaluation tools, professional development programs, or direct and indirect consultation services under consideration for purchase.

   We described in detail three common terms used to “validate” these products: “Scientifically-based,” “Evidence-based,” and “Research-based.”

   Here, we stressed that educators must understand (a) these terms’ respective definitions, histories, and differences; and (b) the questions and objective criteria needed to determine whether a product can validly deliver the student, staff, or school outcomes that it claims.

   We understand that Media and Product Literacy—and their accompanying reviews—take time.

   But especially for purchases that will be used or implemented for five or more years (e.g., a new reading, math, or science curriculum or on-line program; a new district Student Information or Data Management System), the review time avoids costly mistakes, and is essential to long-term student, staff, and school success.

   At the end of Blog Part I, we referenced a January 19, 2024 Education Week article that discussed the “Five Mistakes for Educators to Avoid When Picking ‘Evidence-Based’ Programs.”

   In this Blog Part II, we explore this article and its implications to further assist districts and schools before they acquire “new stuff.”

_ _ _ _ _ _ _ _ _ _

Providing Context to Move Forward

   As a national consultant, I frequently address the selection and implementation of evidence-based programs and practices while helping districts and schools across the country implement effective academic and social, emotional, and behavioral strategies for their students and staff.

   Two on-line sources of evidence-based programs are the What Works Clearinghouse for education, and the Evidence-Based Practices Resource Center for mental health.

   But, despite other sound ways to conduct research that validates different strategies and interventions, both Centers rely almost exclusively on their “Gold Standard” approach when designating programs or practices as evidence-based.

   This approach typically emphasizes the use of Randomized Control Trials (RCT) that demonstrate that a specific program or practice is causally (not correlationally) responsible for targeted student outcomes.

   In an RCT study, students are randomly assigned to either a Treatment Group (that receives the to-be-evaluated program or practice) or a Control or Comparison Group (that either does not receive the program or practice, or receives an innocuous “placebo” approach that is irrelevant to the targeted outcomes).

   My point here is not to get into a heavy discussion of educational research.

   My point is that—if the above description already has your head spinning, and you are responsible for selecting a strategy or intervention for your classroom, grade-level, department, or school—you may avoid the technical research and then choose the wrong intervention.

   Hence, the “five mistakes” from the Education Week article.

_ _ _ _ _ _ _ _ _ _

Mistakes to Avoid When Choosing Evidence-Based Programs

   The five mistakes that educators need to be mindful of when evaluating and choosing an evidence-based program, curriculum, or intervention are:

·       Equating Research Quality with Program Quality

·       Looking only at the summary (or rating)

·       Focusing too much on effect size

·       Forgetting whom the program serves

·       Taking ‘no effect’ for a conclusive answer

   To summarize:

   Even when a program, curriculum, or intervention meets the “gold standard” of research, this “designation” may say more about the quality of the research than the quality of the approach.

   This is because the research often does not tease out exactly why the approach was successful—especially when the program, curriculum, or intervention is complex and multi-faceted.

   Indeed, there may be elements of a program that are unsuccessful, but their failings may be masked by the statistically positive effects of other elements when the results are pooled.

   Given this, educators must look past the ways that, for example, the What Works Clearinghouse organizes the recommendations in its summaries:

·       For Individual Studies and Intervention Reports: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), Uncertain Effects, and Negative Effects; and 

·       For Practice Guides: Strong Evidence (Tier 1), Moderate Evidence (Tier 2), Promising Evidence (Tier 3), or Evidence that Demonstrates a Rationale for a Recommendation (Tier 4). . .

and really read the study(ies) reviewed in a research report, or the methods described in a published research article.

_ _ _ _ _

   Educators must also understand what an effect size represents.

   One of the most common effect size calculations is Cohen’s d. Cohen suggested that d = 0.2 is a “Small” effect size, d = 0.5 is a “Medium” effect size, and d = 0.8 or greater is a “Large” effect size.

   But what does this mean?

   Statistically, a Small (0.2) effect size means that 58% of the Control Group in a study—on the scores or the ratings used to evaluate the program, curriculum, or intervention—fell below the average of the Targeted, Participating, Treatment, Intervention, or Experimental Group.

   A Medium (0.5) effect size means that 69% of the Control Group in a study—on the scores or the ratings used to evaluate the program, curriculum, or intervention—fell below the average of the Targeted, Participating, Treatment, Intervention, or Experimental Group.

   A Large (0.8) effect size means that 79% of the Control Group in a study—on the scores or the ratings used to evaluate the program, curriculum, or intervention—fell below the average of the Targeted, Participating, Treatment, Intervention, or Experimental Group.

   Thus, even with a Large effect size, 21% (i.e., one out of every five students) of a Control Group—that did not participate in, for example, a new reading or social-emotional learning program—showed the same positive progress or response as the group of students who actually participated in the program.

   Critically, even with a 1.4 effect size, 8% of a Control Group demonstrated the same progress or response to a new program as the students who received that program.
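
   For readers who want to check these percentages themselves, the short sketch below (in Python, offered only as an illustration) converts a Cohen’s d value into the approximate percentage of a Control Group that falls below the average of the Treatment Group. It assumes the usual textbook conditions behind Cohen’s d (roughly normal score distributions with similar spread in both groups); it is not a substitute for reading the underlying studies.

```python
from statistics import NormalDist

def percent_of_control_below_treatment_mean(d: float) -> float:
    """Approximate percentage of a Control Group expected to score below the
    average of the Treatment Group, given a Cohen's d effect size.

    Assumes roughly normal score distributions with similar spread in both
    groups (the usual conditions behind Cohen's d).
    """
    return NormalDist().cdf(d) * 100

# The effect sizes discussed above
for d in (0.2, 0.5, 0.8, 1.4):
    below = percent_of_control_below_treatment_mean(d)
    print(f"d = {d}: about {below:.0f}% of the Control Group falls below the "
          f"Treatment Group average; about {100 - below:.0f}% does as well or better")
```

   Running this reproduces the 58%, 69%, 79%, and 8% figures cited above.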

_ _ _ _ _

   Moving on: Even when the research for a program, curriculum, or intervention is positive, educators still need to ask the following essential questions:

·       “Was this program, curriculum, or intervention validated for students, staff, and schools like mine?”;

·       “Do I have the time, resources, and support to implement this approach in my setting?”; and

·       “Do my setting’s circumstances (e.g., the need for immediate change because of a crisis situation) match those present in the approach’s validating research?”

_ _ _ _ _

   Finally, even when a program, curriculum, or intervention has not been designated as validated, educators still need to read the research.

   As alluded to above, an approach sometimes has been validated through other sound research methods that are not preferred or accepted by the What Works Clearinghouse or the Evidence-Based Practices Resource Center.

_ _ _ _ _

   The “bottom line” in all of this is that educators must (a) make objective, data-driven decisions about the new programs, curricula, or interventions that they need; (b) understand the methodological and statistical elements of the research that has evaluated the approaches they are considering; (c) ensure that the approaches are well-matched to their students, staff, and/or schools; and (d) make sure that they have the time and resources needed to implement the finally-selected approach with integrity and at its needed intensity.

_ _ _ _ _ _ _ _ _ _

Post-Script: Avoiding “Best Practices” and “Pilot Projects”

   In a February 12, 2024 article in Fast Company, Keyanna Schmiedl explained “Why it’s time to stop saying ‘best practices’ in the business world.”

[CLICK HERE to Link to this Article]

   Discussing her preference for the term “Promising Practices” over “Best Practices,” she stated:

Language is crucial to leadership.

 

A single word or phrase can change the tone of an entire statement, and thus, the message employees take away from it. Those takeaways then develop into attitudes, which influence company culture and productivity.

 

Therein lies the issue with the term best practices. “Best” doesn’t leave room for flexibility and conversation. “Best” implies there’s only one solution or set of solutions to a problem, and that those solutions should remain unchallenged. And when you aren’t ready to challenge the status quo, you aren’t going to make any progress.

 

According to Salesforce, 86% of employees and executives believe a lack of collaboration or ineffective communication is the cause of workplace failures.

 

By adopting an ethos of promising practices—encouraging leaders to build with their employees, rather than simply instructing them on what they think is best—leaders can create the culture of collaboration and accountability needed to foster success.

 

(P)romising practices empower companies to lead with a mindset of humility and growth. Leaders can say, “This practice is hopeful. It brought good results for us, and we think it can bring good results for you, too.” Then, other organizations can take that baseline method and make it work for them.

 

Taking a holistic approach and incorporating the employee voice is what leads to more effective problem-solving, and therefore, the development of promising practices that work better for everyone.

_ _ _ _ _

   Schmiedl’s comments apply directly to districts, schools, and educational leaders.

   However, I recommend two semantic changes that offer additional reasons to retire the term “Best Practices” in education.

   The first semantic change is to replace Schmiedl’s term “baseline method” with “evidence-based blueprints.”

   In a science-to-practice context—and incorporating this Blog Series’ earlier discussions—I consistently describe the interdependent components that guide successful school change or improvement as existing within “evidence-based blueprints.” These blueprints cover, for example, strategic planning, differentiated instruction, the continuum of academic or social-emotional interventions, and multi-tiered systems of support.

   They are “evidence-based” because all of my work either is evidence-based (through the U.S. Substance Abuse and Mental Health Services Administration—SAMHSA) or uses research-to-practice components that are field-proven. That is, across large numbers of schools in diverse settings across the country, objective evaluations have demonstrated our consistent and meaningful student, staff, and school outcomes.

   They are “blueprints” because, as above, they identify the essential interdependent components needed for successful implementation, but give schools the flexibility (a) to include complementary strategies that add depth and breadth; (b) to sequence their activities in strategic and student-need-driven ways; and (c) to align their professional development, coaching, and evaluation approaches to maximize existing resources and staff capabilities.

   The second semantic change, which still supports Schmiedl’s recommendation that we retire the term “Best Practices,” is to replace it with the term “Effective Practices.”

   The two related reasons are:

·       Many educators hear the term “Best Practices,” think that the recommended practices will make them work “over and above” what really is necessary, and ask, “Why can’t we just do what is needed to make us successful? Why do we have to go ‘above and beyond’?” 

Quite simply: When educators hear “Effective Practices,” they are more comfortable that the recommended practices address the questions above.

_ _ _ _ _

·       Many administrators and school board members hear the term “Best Practices,” think that the recommended practices will be overly expensive, and ask, “Why are you selling us a Lexus, when all we need is a Toyota?” 

Once again, when they hear “Effective Practices,” they are comfortable that the costs will result in the expected outcomes, and that a lesser investment might undercut these outcomes.

_ _ _ _ _

   Finally, as long as we are retiring the term “Best Practices,” let’s also reconsider the use of Pilot Projects.

   In my experience, districts and schools most often implement Pilot Projects when a program or approach:

·       Is being pushed by small groups of educators, and their administrators really are not terribly interested, but they nonetheless do not want to completely discourage the group or tell them “no” straight out; 

·       Has questionable research or is unproven with the projected group(s) of students, staff, or schools; or

·       Is one where the district or school doesn’t have (and may never have) the money or resources to go “all-in” on the program or approach.

   But Pilot Projects are also often recommended when well-validated programs, curricula, or interventions—that would have long-term positive impacts on students, staff, and schools—are suggested, and the administrators in question really don’t like the approach (or, sometimes, the individuals making the proposal).

   Here, the administrators want to appear “open to new ideas,” but they really are hoping that the pilot will fail or the individuals will become discouraged.

   Even when implemented successfully, pilot projects rarely are scaled up. This is because: 

·       Those (usually, school staff) who do not want a successful pilot project to expand to their school, department, or grade level, find ways to question, minimize, reject, or cast doubt on its ability to be scaled-up or to work “in our school with our staff and our students;” and

·       Those (usually, district administrators) who do not want the successful pilot project to expand, cite the scale-up’s resources and costs, and its “competition” with other district priorities as reasons to not take the next steps.

   As an outside consultant, given the circumstances above and—especially—the low potential for eventual system-wide scale-up, I almost never agree to work in a district on a “pilot project.”

   If you are a district-employed staff member, know that your involvement in a pilot project may result in angry, jealous, or slighted colleagues. . . especially when they perceive you as receiving “special” attention, releases, resources, or privileges.

   On a semantic level, I understand that some programs, curricula, or interventions need to be “Field-Tested”. . . so let’s use this term. The term “Pilot Project” simply carries too much baggage. . . and this baggage, once again, predicts that the approach will never be fully implemented to benefit the students, staff, and schools that it might.

_ _ _ _ _ _ _ _ _ _

Summary

   Building on Part I of this two-part Series, this Blog Part II first discussed the evaluative approaches used by the What Works Clearinghouse for education and the Evidence-Based Practices Resource Center for mental health to rate the evidence behind specific programs, curricula, and interventions considered for implementation in districts, schools, and other educational settings.

   We then summarized the five “mistakes” that educators should avoid when choosing evidence-based programs. These mistakes are:

·       Equating Research Quality with Program Quality

·       Looking only at the summary (or rating)

·       Focusing too much on effect size

·       Forgetting whom the program serves

·       Taking ‘no effect’ for a conclusive answer

   Finally, we expanded the discussion, addressing why education should change the term “Best Practices” to “Effective Practices,” and why educators should be wary when administrators give permission for “Pilot Projects” in lieu of the full, system-wide implementation of well-validated programs, curricula, or interventions.

_ _ _ _ _

A Funding Opportunity: Speaking of Evidence-based Programs

   When districts or schools are interested in implementing my work, especially when funding is dwindling or short, I often partner with them and help them write (often five-year) federal grants from the U.S. Department of Education.

   To this end:

   A new $4 million grant program is coming up in a few months that focuses on moderate to large school districts with at least 25 elementary schools.

   As we can submit multiple grants from different districts, if you are interested in discussing this grant and a partnership with me, call (813-495-3318) or drop me an e-mail as soon as possible (howieknoff1@projectachieve.info).

   A separate five-year $4 million grant program will likely be announced a year from now. This program is open to districts of all sizes.

   If you are interested, once again, it is not too early to talk.

   BOTH grant programs focus on (a) school safety, climate, and discipline; (b) classroom relationships, behavior management, and engagement; and (c) teaching students interpersonal, conflict prevention and resolution, social problem-solving, and emotional awareness, control, communication, and coping skills and interactions.

   If we partner, I will write the bulk of the Grant proposal (at no cost), and guide you through its submission.

   Beyond these grants, if you are interested in my work for your school or educational setting, I am happy to provide a free consultation for you and your team to discuss needs, current status, goals, and possible approaches.

   Call me or drop me an e-mail, and let’s get started.

Best,

Howie

[CLICK HERE to read this Blog on the Project ACHIEVE Webpage]

Saturday, December 19, 2015

May the Force be with You- - The Ultimate Organizational Strategies for School Success (Part I)



Dear Colleagues,

   After months of build-up, the new Star Wars movie was released a few days ago. . . not surprisingly, to record audiences. 

   On the one hand, it is amazing that this film series- - which began in 1977, and actually has two more installments planned (in 2017 and 2019)- - has transcended generations of children, adolescents, and adults.

   On the other hand, it is equally amazing that the characters- - from Yoda on- - have influenced the same generations with lessons that apply to friendship, business, and life itself.



   And so. . . in the spirit of holiday giving, this Blog message describes the “ultimate core/essential” strategies for school success- - organized in two sets of 7 C’s.
_ _ _ _ _ _ _ _ _ _

Process Dictates Products and Outcomes

   With the assumption that schools have “enough” resources, materials, personnel, professional development, and other supports (and, I know, that we always need more). . .

   Why are some schools more positive, productive, and successful than other schools that have the “same” amount of resources and supports?

   To answer this question, I want you to think about the one or two educators who most positively influenced you- - at any point in your preschool through graduate career.

   Was it the smartest teacher?  The teacher who taught your favorite subject?  The teacher whose class contained your best friends?

   Unlikely.

   You are probably thinking about the teachers who were most enthusiastic. . . caring. . . optimistic. . . inspiring. . . and who may have changed your life.

   For me. . . it was my high school music teacher. . . someone who I keep in touch with to this day.

   By way of analogy, my point is:  The most successful schools are the ones that “mix” their available resources with staff enthusiasm, caring, optimism, and inspiration.

   These are the schools where the underlying processes that motivate success result in the products and outcomes of success.

   Remember, the teams that have the best athletes win championships only when they play as a team.
_ _ _ _ _ _ _ _ _ _

The 7 C’s of Organizational Success

   So. . . what are the underlying processes that help organizations (i.e., districts and schools) to maximize their success?

   These are summarized in the following 7 C’s:

·         Charting the Course
·         Collecting the Supplies
·         Cruising with Purpose
·         Checking Coordinates
·         Correcting for Drift
·         Containing Crises
·         Celebrating the Voyage

Let’s briefly describe each of these components.

#1:  Charting the Course

   Joel Barker said, “Almost all successful individuals and organizations have one thing in common—the power and depth of their vision of the future.”

   This is the essence of strategic planning. Charting the Course focuses on specifying the goals, objectives, and outcomes of your school’s (or district’s, or grade level’s or classroom’s) current or desired journey or “voyage”—whether in the organizational, climate, academic, social-emotional-behavioral, and/or personal/ interpersonal (or combined) areas.

   Critically, and as much as possible, your desired outcomes should be described in specific, behavioral terms so that they are observable and measurable.

   For example, rather than saying:  

   “I want to improve positive school climate this year,”

you might want to specify instead:

   “I want to increase the number and ratio of positive and prosocial to negative and antisocial interactions between students and staff, respectively, based on (a) classroom and common school area observations; (b) incidents reported in the classroom and referred to the office; (c) student, staff, and parent surveys and self-reports; and (d) student, staff, and parent focus group outcomes.”

   Relative to Barker’s quote, your goals are your “vision of the future.” Without your goals and vision, there truly is no strategic plan.
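
   To make this concrete, here is a minimal sketch (in Python) of how a team might compute the kind of measurable indicator named in the example goal above: the ratio of positive and prosocial to negative and antisocial interactions tallied during observations. The tallies and field names are hypothetical; the point is simply that a goal written in observable, measurable terms can be tracked with simple arithmetic.

```python
# Hypothetical observation tallies from two data-collection windows
# (field names and numbers are illustrative only)
baseline = {"positive_interactions": 120, "negative_interactions": 60}
midyear = {"positive_interactions": 150, "negative_interactions": 50}

def positive_to_negative_ratio(tally: dict) -> float:
    """Ratio of positive/prosocial to negative/antisocial interactions observed."""
    return tally["positive_interactions"] / tally["negative_interactions"]

print(f"Baseline ratio: {positive_to_negative_ratio(baseline):.1f} to 1")  # 2.0 to 1
print(f"Mid-year ratio: {positive_to_negative_ratio(midyear):.1f} to 1")   # 3.0 to 1
```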
_ _ _ _ _

#2:  Collecting the Supplies

   This step focuses on identifying and gathering the needed resources so that your journey has the highest probability of success.  Significantly, many people think only about money as their primary resource.  

   And yet, there are other resources that sometimes are more powerful. For example:

   * Other people- - colleagues, mentors, consultants, or other professionals- - can be resources.  

   * Written, audio-visual, or multi-media information sources- - books, DVDs, web-based trainings or references- - can be resources.

   * Time- - to do research, to engage in training, to devote to self-improvement, to focus tenaciously on a strategic goal- - is an essential resource.

   * Places and facilities- - libraries or other research sites, model or exemplary practice sites, simulation or job-related training sites- - are possible resources.

   * And, finally, technology- - with all of its wondrous innovations and advances- - is a resource.

   The point here is that goal-setting is not enough. If we are under-resourced, we may never build the momentum needed to reach our goals, or we may need to abandon the journey because we run out of provisions. So, part of strategic planning is to “plan for the journey before embarking on the journey.”

   However, relative to this planning, we sometimes need to over-plan and over-resource for the journey. That is, we need to plan not just for the “best-case scenarios,” but also for the “worst-case scenarios.”  Functionally, this means that we sometimes need to have more resources available than we expect to need to meet our goals.

   Once again, goals are not successfully attained when challenges are underestimated or when resources are not available to address emergency situations.
_ _ _ _ _

#3:  Cruising with Purpose

   You are able to Cruise with Purpose once you have (a) developed your strategic plan, (b) identified and gathered the resources needed, (c) prepared for potential difficulties, (d) chosen the optimal time to begin, and (e) determined how and when you are going to evaluate your progress.

   With all of this accomplished, you can embark on your journey with direction, determination, confidence, and purpose.

   While all of this sounds natural and easy, many people complete all of the planning and preparation, but never embark on the journey.

   Sometimes this occurs because of a fear of failure, a fear of the unknown, or a fear of taking or being in the lead. Sometimes, it is due to competing priorities, a resistance to change, or the belief that a secure present is better than a challenging future. And sometimes, it is because of a lack of confidence, determination, or motivation.

   Here is where the “strength of purpose” is essential. Critically, while there are no certainties in life, are we truly living life when we are determined to keep everything certain?  

   Inner strength and purpose allow us to conquer our fears. . . they motivate us to make the future our priority, and. . . they inspire us to take the first steps along the path to accomplishment and success.

   Trammell Crow said, “There’s as much risk in doing nothing as in doing something.”

   And so, in order to make planning and preparation meaningful, we must take action. Said another way, once ready, we need to hoist the anchor, engage the rudder, and let out the mainsails- - confident in our ability to take advantage of the good and to adjust to the bad.
_ _ _ _ _

#4:  Checking the Coordinates

   This step is all about “formative evaluation.”

   Formative evaluation involves planned, periodic evaluations that occur at different points during the journey to determine whether we are on course or in need of mid-course corrections.

   Formative evaluation is important because most goals are not accomplished in a direct, straight-line fashion. Typically, progress involves different pathways, requires different levels of energy, and occurs at different speeds. Progress also, at times, requires detours, rest periods, and moments to consolidate the advances made.

   Without formatively “checking the coordinates,” schools, staff, and students sometimes get lost, miss the progress made, or prematurely believe that they have reached their destination.  In addition, psychological research has long shown that when students chart and graph their progress toward long-term goals, their motivation increases and they attain more of their goals.

   Formative evaluation, then, is the feedback process that all of us need when long-term goals involve a series of short-term steps. If you think about it, most mountains are not climbed by ascending a single steep path to the summit.  Mountains are conquered by patiently negotiating a gradual series of switchbacks that increase the potential for success.

   Similarly, most large bodies of water are navigated by tacking the sailboat back and forth, maximizing the power of the wind to successfully arrive at the desired destination.

   Without formative evaluation, we may not tack at the right time, we may tack too many times, or we may not tack at all.

   William Drayton said, “Change starts when someone sees the next step.”  Drayton understood formative evaluation and the Seven C’s of strategic, organizational planning.
_ _ _ _ _

#5:  Correcting for Drift

   Correcting for Drift involves the actions needed when formative evaluations tell us that we are off-course.

   Let’s face it—life is complex.  

   A few years ago, there was a retirement commercial that began with an older gentleman chiding us, “What did you think—life was an expressway?”

   Clearly not.

   With all the complexities in life (in general and in school), and everything that seems to be bombarding us at the same time, it is easy to get lost in the irrelevant details, the inevitable detours, or the “crises of the day.” At times, all of this causes us to lose our focus and drift from our path.

   And so, using our formative evaluation results, we need to periodically make mid-course corrections to stay on track.

   Think about it this way:  Many of you would be surprised to learn that when a plane travels across the country, it is off-course 90 percent of the time. This is because airplanes travel from one air traffic control center to the next- - at least, until they are within fifty or so miles of their final destination.

   Thus, because the control centers are not aligned with your departure and destination cities, during the flight, the captain, the computers, and the air traffic control centers are constantly programming the plane to make mid-course corrections based on their current formative evaluation data.

   Formative evaluations must be built into and executed as part of the system, school, staff, and student goals in our strategic plans. This helps us to make the necessary mid-course corrections so that we stay on track to reach our goals. Without these corrections, we could get so off course or so lost that our only option would be to give up the journey and start over again.

   The time we spend periodically evaluating and correcting our progress often saves us ten times the time required to restart the process from the very beginning.
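
   As an illustration only, here is a minimal sketch (in Python) of the logic behind checking the coordinates and correcting for drift: compare actual progress at each planned check-in to the progress expected at that point, and flag any check-in where the gap exceeds a tolerance set in advance. The data, measure, and tolerance are hypothetical; the value lies in building these periodic checks into the strategic plan itself.

```python
# Hypothetical formative-evaluation data: percent of a long-term goal
# expected vs. actually attained at each planned check-in
checkpoints = [
    {"month": "October", "expected": 25, "actual": 24},
    {"month": "January", "expected": 50, "actual": 41},
    {"month": "March",   "expected": 75, "actual": 70},
]

TOLERANCE = 5  # percentage points of drift allowed before a mid-course correction

for point in checkpoints:
    drift = point["expected"] - point["actual"]
    status = "on course" if drift <= TOLERANCE else "correct for drift"
    print(f"{point['month']}: expected {point['expected']}%, "
          f"actual {point['actual']}% -> {status}")
```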
_ _ _ _ _ _ _ _ _ _

A Mid-Course Summary and Set-Up

   The first two of the 7 C’s focus on identifying your district, school, or staff’s strategic goals, designing a functional action plan, and collecting the resources needed to begin executing the plan.

   Steps three through five of the 7 C’s involve the actual implementation of the plan, along with the periodic evaluations needed to ensure that you are progressing toward your goals- - making needed mid-course corrections if you are drifting or getting off-track.

   Step six involves both planning and execution.  It entails the advance planning that prevents most crises, as well as the strategic steps needed so that, when crises do occur, they are quickly addressed.

   Finally, Step seven emphasizes the importance of enjoying the entire journey- - not just the end of the journey when success occurs.
_ _ _ _ _

#6:  Containing Crises

   This sixth (of the 7) C, Containing Crises, focuses on the planning that prevents crises (as you are working to attain your strategic goals), and the responses that resolve them.

   While we have talked some about prevention, I want to introduce what I call the “NASA Approach to Crisis Prevention.”

   This involves thinking, during the development of a strategic plan, about everything that could possibly go wrong while actually executing the plan, developing an “early warning system” as an alert for potential crises, and then preparing response systems or contingency plans to address any crises that might actually occur.

   The reason why I call this the “NASA Approach” is because this is exactly what NASA does when designing its space ships, and what it is doing now as it conceptualizes its future trips to Mars.

   More specifically, NASA spends an incredible amount of time in development and training in the areas of crisis prevention, intervention, and response.

   For example, as they are designing the space capsules that will travel to Mars, they are building them with what are called “redundant” or “back-up” systems. That is, during the design process, NASA engineers will envision every possible hardware or software system failure or misfortune that might occur from lift-off to touch-down. Guided by these “worst-case scenarios,” they will build back-up systems into the capsules- - extra fuel cells, additional computer capacity, by-pass systems and strategies, and emergency procedures for unlikely, but possible, events.

   Crisis prevention is also integrated into every astronaut’s training prior to leaving on a mission. Indeed, beyond preparing for the scientific parts of their mission, astronauts spend a large amount of time on “crisis response” procedures. Once again, after imagining every possible crisis that might occur on the shuttle, NASA conditions the astronauts so that they can respond to any crisis situation at virtually an automatic level. This training and response is essential- - especially when the difference between survival and catastrophe, at times, is counted in seconds, not minutes.

   The point here is that schools need to think, as part of their strategic planning, about the potential crises that may affect or completely ruin their potential to succeed. While good planning may actually prevent most crises from happening, planning also results in interventions that are available to contain and minimize crises if they do occur, and responses to repair the damage once they are over.
_ _ _ _ _

#7:  Celebrating the Voyage

   Step number seven, Celebrating the Voyage, focuses on celebrating the fact that (a) we can plan and improve our student, staff, and school outcomes by (b) making incremental progress toward our goals- - succeeding at different stages in the process; and that (c) we should commemorate and celebrate our short- and long-term successes that result in short- and long-term contribution, growth, and achievement.

   This step, then, celebrates the steps during the journey, as well as the journey once the destination has been reached.

   Of the possible areas of celebration, I would suggest that the first one above is the most important.

   Too many times, we focus on “the win,” “the award,” or “the recognition.” And yet, the reality is that we do not always reach our ultimate or long-term goals.

   Given this, we need to refocus our “perceptions of success”- - demonstrating sincere motivation and appreciation for the accomplishment of creating the strategic plan itself, for the care in preparing for the journey, for the thrill of taking the first steps, and for the excitement of experiencing new challenges and opportunities.

   We also need to teach our students this lesson.

   Indeed, when working with parents and teachers, I often remind them that:

“It may take a whole village to raise a child, but it also takes a whole child to raise a village.”

   By this, I mean that we need to help our students, at levels appropriate to their development and maturity, to create strategic school (and life) plans for themselves (at different age levels and across the stages of their lives). Moreover, we need to help them understand that “success” is represented- - as above- - by the journey itself, the short-term accomplishments, and the ultimate or final results.

   As a final step in Celebrating the Voyage, I would like to define “Failure” so that we can contrast it with “Success.”

   I firmly believe that “the only failure . . . is not being able to explain why you have been successful or unsuccessful.”

   To me, then, Failure does not occur when we do not “win,” attain a goal, or accomplish a task.

   Failure occurs when we do not fully understand why we have not succeeded, and when we do not learn from or change the conditions so that we might succeed in the future.

   Conversely, when we are successful, we fail ourselves when we do not determine how that has occurred. Indeed, when we understand how we have succeeded in the past, we can duplicate the effort or conditions so that we can continue to succeed in the future.
_ _ _ _ _ _ _ _ _ _

Summary

   Now that the Elementary and Secondary Education Act (ESEA- - also now known as the “Every Student Succeeds Act”- - ESSA) has been passed and signed into law (last week by Congress and the President, respectively), we know that the educational “landscape” will change.

   In a nutshell, our state departments of education will have more decision-making authority over the criteria and evaluation of school success, and our districts and schools will need to adapt to these new criteria. 

   But significantly, with the new law, there appears to be a change of perspective- - moving from a deficit, failure-focused approach to an asset, success-oriented approach.

   Regardless, the change will give our districts and schools another opportunity to recalibrate, rededicate, and renew their commitments to students’ academic and social, emotional, and behavioral learning, progress, mastery, and application.

   And through it all, the 7 C’s will again take “center stage” in distinguishing the schools that are successful from those that lag behind.

   And so, even as you approach the holidays, recognize that most districts and schools are already planning for the next school year (2016-2017).  Thus, please think about the 7 C’s and how they can help you and your colleagues move to the “next level of excellence.”

·         Charting the Course
·         Collecting the Supplies
·         Cruising with Purpose
·         Checking Coordinates
·         Correcting for Drift
·         Containing Crises
·         Celebrating the Voyage
_ _ _ _ _

   Meanwhile, as you take your well-deserved Winter Break and progress through the Holiday Season, please accept my best wishes for a safe, restful, and joyous time with your families and friends.

   My next Blog message will continue- - in the spirit of New Year’s and New Year’s resolutions- - the discussion above with a focus on the 7 C’s for staff success.

   So be well, and stay tuned.  May the (holiday) force be with you !!!

Best,

Howie