Critical Questions to Ask Your “Hattie Consultant”
Before You Sign the Contract
Dear Colleagues,
Introduction
Over the past five years especially, John Hattie has become internationally known for his meta-analytic research into the variables that best predict students’ academic achievement (as well as other school-related outcomes).
More precisely, his various Visible Learning books (which have now generated a “Hattie explosion” of presentations, workshops, institutes, and “certified” Hattie consultants) report the results of his “meta-meta-analyses.” These are analyses that average the effect sizes from many separate meta-analyses, each of which pooled studies investigating the effect of a single psychoeducational variable, strategy, intervention, or approach on student achievement.
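For readers who want the statistics made concrete, here is a minimal sketch of the arithmetic involved, using hypothetical numbers (my illustration, not Hattie’s exact computation; some meta-analyses also weight their averages rather than using the simple means shown here). Each study contributes a standardized effect size,

$$ d \;=\; \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}, $$

each meta-analysis averages the d values from its pooled studies, and a meta-meta-analysis then averages the mean effects of the k underlying meta-analyses:

$$ \bar{d}_{\text{meta-meta}} \;=\; \frac{1}{k} \sum_{i=1}^{k} \bar{d}_i. $$

So if three meta-analyses of the same approach reported mean effects of 0.60, 0.70, and 0.80, the meta-meta-analytic effect would be (0.60 + 0.70 + 0.80)/3 = 0.70.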
Let me be clear from the start: this Blog is not intended to criticize or denigrate, in any way, Hattie on a personal or professional level. He is a prolific researcher and writer, and his work is quite impressive.
However, this Blog does critique the statistical and methodological underpinnings of meta- and meta-meta-analytic research, discuss its strengths and limitations, and, most essentially, delineate its research-to-practice implications and implementation requirements.
It also reviews a recent critique by Dr. Robert Slavin, the primary developer of Success for All, an evidence-based literacy (and math) program that is one of the longest-standing, best-researched, and most effective instructional approaches in recent history.
On June 21, 2018, Slavin published a Blog, John Hattie is Wrong, in which he reported his analyses of, and concerns about, Hattie’s research and conclusions.
Peter DeWitt, a Hattie colleague and trainer, then responded to Slavin in a June 26, 2018 Education Week article, John Hattie Isn’t Wrong. You Are Misusing His Research. This article included quotes from Hattie in response to Slavin.
Slavin then responded in an Education Week Letter to the Editor published on July 17, 2018.
_ _ _ _ _
My current Blog
quotes from all three of these pieces so that the issues surrounding
meta-analyses and meta-meta-analyses are clearly delineated.
To this end, it is important that educators understand:
· The strengths and limitations of meta-analytic research—as well as meta-meta-analytic research;
· What conclusions can be drawn from the results of sound meta-analytic research;
· How to transfer sound meta-analytic research into actual school- and classroom-based instruction or practice; and
· How to decide whether a practice that is effective in one school, classroom, or teacher’s hands is “right” for your school, classrooms, and teachers.
But there is more. . .
_ _ _ _ _ _ _ _ _
My Addition: The Implementation Methods Are Missing
The second half of this Blog identifies Hattie’s current (June 2018) “Top Twenty” approaches showing the strongest meta-meta-analytic effects on student learning and achievement.
Significantly, many of these “Top 20” approaches have changed since my previous Blog message regarding Hattie’s work on September 25, 2017—less than ten months ago. In addition, there are new approaches on the list that have never previously been cited . . . and other approaches have new labels.
Finally, and most
importantly, many of the approaches have such generic or global names that it
is virtually impossible to determine, at a functional school, classroom,
teacher, or student level, what methods and implementation steps were used
in the research, and what methods and steps should be used, right now, by a
district or school interested in implementation.
This science-to-practice implementation issue is one that none of the researchers are really addressing.
And with the Top Twenty list itself a moving target, districts and schools face the additional dilemma of trying to keep up with, and accurately interpret, the research.
_ _ _ _ _
The remainder of
the Blog discusses a step-by-step approach that districts and schools need to
take to translate Hattie’s research into effective and meaningful practice.
The critical premise here is that—just because we know from meta-analytic research that a program, strategy, or intervention significantly impacts student learning—we do not necessarily know the implementation steps used in the research studies that produced the significant effect . . . and we cannot assume that all or most of the studies used the same implementation steps, or that these steps are the “right” ones for a specific district or school.
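A simple, hypothetical illustration of this premise (the numbers are mine, not Hattie’s): a pooled mean effect reveals nothing about the spread, or the procedures, underneath it. Three studies of the “same” named approach, each using different implementation steps, could report

$$ d_1 = 0.10, \quad d_2 = 0.35, \quad d_3 = 0.75, \qquad \bar{d} \;=\; \frac{0.10 + 0.35 + 0.75}{3} \;=\; 0.40, $$

and the average of 0.40 looks respectable. But a district adopting the approach without knowing which study’s implementation produced which result could just as easily reproduce the 0.10 as the 0.75.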
Two “Top Twenty” examples (“Response to Intervention” and “Interventions for Students with Learning Needs”) are used to demonstrate how this step-by-step approach can be applied.
_ _ _ _ _ _ _ _ _ _
The Questions to
Ask the Outside “Hattie Consultants”
The Blog closes by
identifying Five Questions that districts and schools need to ask outside
“Hattie consultants”—who are now making themselves available to help “implement
Hattie’s research and work”—BEFORE they sign the contract.
As Hattie’s work
has become more and more popular, we now have a “cottage industry” of “official
and unofficial” Hattie consultants who are available to assist.
With no disrespect intended, just because someone has been trained by Hattie, has heard Hattie, or has read Hattie does not mean that they have the expertise, across all of the 138 (or more) rank-ordered influences on student learning and achievement, to analyze and implement any of the approaches identified through Hattie’s research.
And so, the Blog details the questions that districts and schools need to ask when consultants say that their consultation is guided by Hattie’s research.
_ _ _ _ _ _ _ _ _ _
Summary
Once again, none of the points expressed in this Blog are about John Hattie personally. Hattie has made many astounding contributions to our understanding of the research in areas that impact student learning and the school and schooling process.
However, many of my points relate to the strengths, limitations, and effective use of research reports using meta-analysis and meta-meta-analysis. If we are going to translate this research into sound practices that impact student outcomes, educational leaders need to objectively and successfully understand, analyze, and apply the research so that they make sound system-, school-, staff-, and student-level decisions.
And if educational leaders are going to use other staff or outside consultants to guide the process, they must ask these questions and get the answers needed to ensure that these professionals have the knowledge, skills, and experience to accomplish the work.
In the end, schools
and districts should not invest time, money, professional development,
supervision, or other resources in programs that have not been fully validated
for use with their students and/or staff.
Such investments
are not fair to anyone—especially when they become (unintentionally) counterproductive
by (a) not delivering the needed results, (b) leaving students further behind,
and/or (c) creating staff resistance to “the next program”—which might,
parenthetically, be the “right” program.
_ _ _ _ _
I hope that this
discussion has been useful to you.
As always, I look
forward to your comments. . . whether on-line or via e-mail.
For those of you
still on vacation, I hope that you have been enjoying the time off.
If I can help you
in any of the areas discussed in this Blog, I am always happy to provide a
free one-hour consultation conference call to help you clarify your needs
and directions on behalf of your students, staff, school(s), and district.
Best,
Howie
[For the Entire Blog Message: CLICK HERE]