…and a rundown of some of his latest work.
It turns out, according to the professor born in the South Island of New Zealand, that the All Blacks are in fact not the most successful rugby team in history. No, I was reassured unequivocally that this honour lies with that renowned rugby-playing nation… Malta. One of our other lunch partners, another Kiwi, pointed out that this obviously isn't true, unless it is measured in some unconventional way, or excludes tier one nations, or perhaps isn't about rugby at all. "Well then you've just changed the rules!" came John Hattie's emphatic reply. "Never argue stats with a statistician!"
Since that shared lunch earlier this month, I've looked for the proof. I've thrown into Google every search term I can conceive of, and I can't find a single source to back up Hattie's claim. With no small amount of irony, I've sifted through data, wilfully ignoring the overwhelming body of evidence pointing me in the other direction, trying to find just a single glimmer of quantitative data that I can distort to meet my needs. But I've come up empty-handed. Nothing. Nada. Maybe, as some of his critics would argue, he genuinely isn't as good at stats as his disciples would have you believe…
But it has to be said that there is something about his latest work, a model that outlines the science of how we learn and how various learning strategies fit into it, that does seem to make a great deal of sense. As mindful as I am that this sounds like straightforward confirmation bias (‘I like it because it feels right’), if it feels like it makes sense and it fits nicely with a wider body of research and literature, then that makes it worth taking a closer look at…
The start of the day.
The story I heard was that some lucky/clever individual at Waldegrave School (home of the Richmond Teaching School Alliance) had somehow ‘won’ John Hattie and his Visible Learning gang for a day. Either way, by virtue of the growing relationship between the schools in neighbouring Kingston and Richmond boroughs, I was there to enjoy John and his team taking a day out of their world tour (no, really) to do their thing in Twickenham.
Though the enjoyment wasn’t immediate.
The morning started with an introduction from Deb Masters (@DebMasters1), one of John’s collaborators in the Visible Learning team, setting the scene for the day. The first activity was designed to make us describe a learning process as we grappled with a task. The takeaway message was supposed to be that even as educational professionals we often don’t have a particularly broad vocabulary for, or understanding of, the process of learning. (I actually thought my colleague and I had a better crack at it than we were given credit for, but I don’t suppose that really changes the key suggestion that seemed to be about the importance of metacognition and developing a language for learning).
The actual activities used in this warm-up were fairly abstract (think aptitude test/ non-verbal reasoning meets back-of-a-newspaper puzzle). I guess it made a point, but as someone who is occasionally sceptical about extrapolating from really narrow, niche, decontextualised research settings to the real world of a particular classroom in a particular school with a particular group of students, it didn’t do a great deal to quell the slight unease that had been set churning during Deb’s opening comments, peppered as they were with glib references to ‘activating’ research for teachers and ‘activating’ learning in the classroom… hmmm.
And then an eye-opening look at what works and when.
What ensued, over the remainder of the day, was an unpacking, primarily by Hattie himself, of aspects of his most recent paper (coauthored with Gregory Donoghue, available here) and a comprehensive mapping of various learning strategies onto a handy model for learning. The ideas will be largely familiar to those who’ve read his previous publications, as will the methodology (meta-analyses and effect sizes), but the format certainly gave me a real moment of clarity, particularly in relation to the importance of thinking about when any particular strategy is likely to be effective… This is something that hasn’t been so explicitly addressed in previous iterations from the Visible Learning juggernaut.
The other thing that struck me was the clarity with which the work is presented. The INSET materials (at 70+ pages, it's virtually a book on its own) felt significantly more accessible than any of the three VL books I own (each of which is worth reading, but page-turners they ain't). The article isn't too shabby either (though it lacks the graphics!).
The backbone of the model goes something like this: learning moves from surface to deep to transfer, and at each level a strategy can serve either the acquisition or the consolidation of that learning.
Those familiar with the Visible Learning work will no doubt recognise the idea of surface > deep > transfer (think SOLO Taxonomy), and will also presumably be aware of the context in which Hattie sits this sequence:
“It is critical to note that the claim is not that surface knowledge is necessarily bad and that deep knowledge is essentially good. Instead, the claim is that it is important to have the right balance: you need to have surface to have deep; and you need to have surface and deep knowledge and understanding in a context or set of domain knowledge. The process of learning is a journey from ideas to understanding to constructing and onwards. It is a journey of learning, unlearning and overlearning. When students can move from ideas to ideas and then relate and elaborate on them we have learning – and when they can regulate or monitor this journey then they are teachers of their own learning. Regulation, or metacognition, refers to knowledge about one’s own cognitive processes (knowledge) and the monitoring of these processes (skilfulness). It is the development of such skilfulness that is an aim of many learning tasks and developing them is a sense of self-regulation.”
(Hattie, 2009, p. 29)
However, the critical feature of this new body of work is that for each stage of learning, the VL team have identified the specific strategies likely to be most effective. It really does appear very handy. And it throws up some interesting conflicts with what I perceive as quite widely held beliefs about ‘what the research says’. The key message? When you take a body of research about a particular strategy ‘en masse’ it presents a very different picture (read ‘effect size’) to when you go through that same body of research, consider at what stage in the learning process the strategy was being used, and then judge its effectiveness at that stage.
- Using highlighters? Actually quite effective in the acquisition stage for surface learning.
- Spacing, interleaving, testing? Basically good just for consolidation of surface learning.
- Elaborative interrogation? Metacognition? Wait for acquisition of deep learning before you wheel them out.
- Problem-based learning? Inquiry learning? All that group-work stuff which gets a bad rep? Actually pretty powerful stuff if you wait until the consolidation of deep learning before you use it.
And this really comes back to trying to understand effect sizes…
On effect sizes.
Hattie started the day with what sounded vaguely like a defence of his use of effect sizes (more on that later) and then made sure his audience were clear that they aren’t to be treated simply as a tick-list of strategies to do. Rather, they should provide a context for thinking about our ‘mindframes’ as educators…
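For readers who haven't met the metric: an effect size (Cohen's d, the form the VL work leans on) is simply the difference between two group means divided by their pooled standard deviation. A minimal sketch in Python — the scores below are invented purely for illustration, not drawn from any Visible Learning study:

```python
# Cohen's d: standardised mean difference between two groups.
# All numbers here are made up for illustration only.
import statistics


def cohens_d(treatment, control):
    """(mean difference) / (pooled standard deviation)."""
    n1, n2 = len(treatment), len(control)
    var1 = statistics.variance(treatment)  # sample variance, n-1 denominator
    var2 = statistics.variance(control)
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd


# Invented post-test scores for a class taught with and without some strategy
with_strategy = [68, 72, 75, 70, 74, 71]
without_strategy = [64, 66, 70, 65, 69, 67]
d = cohens_d(with_strategy, without_strategy)
print(round(d, 2))
```

Hattie's well-known rule of thumb treats d ≈ 0.4 as the 'hinge point' above which an influence merits serious attention; the critics' objection, as I understand it, is less about the arithmetic than about whether ds pooled across very different studies, outcome measures and age groups can be meaningfully compared at all.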
And then he went on…
“All that you need in order to enhance learning is a pulse. Pretty much everything you do as a teacher, works. So don’t ask ‘what works?’, ask ‘what works best for which students and when in the learning process?'”
And, rather topically given recent headlines and the ensuing twitter storm, on the importance of interpreting (rather than just swallowing) effect sizes…
“If some studies have said that homework doesn’t work, it doesn’t mean you get rid of it! It means you improve it!”
And on the importance of continually evaluating our impact on learning…
“Build a coalition around the blue zone [the most positive effect sizes], identify the impact through evidence, then scale it up and invite those in the yellow zone to join you. You have no right to just sit in the yellow zone!”
On this he was unequivocal and surely nobody can disagree with the sentiment that all staff have a moral imperative to continually improve and refine what they do. But ‘say what you will’ about the value of effect sizes…
Say what you will.
I tweeted a quote back in August, whilst reading Dylan Wiliam’s ‘Leadership for Teacher Learning’:
V.interesting: “…results of meta-analyses [eg by Hattie and Marzano] will be at best meaningless and at worst misleading” @dylanwiliam
— Matt Webber (@LandTMagpie) 8 August 2016
So, when you find yourself sitting opposite John Hattie to enjoy an INSET lunch (happy coincidence, rather than design), what do you do?
First some small talk. As a school with 1-to-1 iPad deployment, I was keen to hear his thoughts on the role of tech in the classroom. We agreed that the focus should be on the learning methodology rather than the technology per se, following which he offered some insight into research he is currently involved in, looking at the potential of social media in supporting learning. He seemed particularly enthusiastic about the discovery in one particular study that students are apparently asking questions via social media – while in the classroom – that they aren’t asking directly (i.e. the good old-fashioned way… with their mouths). It sounds intriguing and, although I’ve not seen the research, the thing I’m most curious about is why these particular students don’t feel they can ask their questions directly… sounds like it could be more about relationships and classroom climate than about technology…
And then I asked him outright…Tactfully, but outright. “You have talked today and written a lot about effect sizes and meta-analyses. On the other hand, your detractors argue that using effect sizes in this way is misleading. What is the average teacher in the middle of it all supposed to think?”
His response, while mopping up the last of his chicken curry with some soggy naan, was delivered with a face that expressed a certain fatigue about the whole thing. “Look, I know Dylan Wiliam has said a whole load of nasty stuff about it in a book…” (I didn’t tell him I’d previously tweeted a quote from said book) “…but it’s a tool. They aren’t perfect, but they are a tool. You don’t stop using a tool just because it isn’t perfect. Dylan Wiliam uses effect sizes himself!”
So, not particularly illuminating and a little touchy perhaps… but, thankfully, it didn’t throw him off his stride for the afternoon session.