The day I had a curry with John Hattie…

…and a rundown of some of his latest work.

It turns out, according to the professor born in the South Island of New Zealand, that the All Blacks are in fact not the most successful rugby team in history. No, I was reassured unequivocally that this honour lies with that renowned rugby-playing nation… Malta. One of our other lunch partners, another Kiwi, pointed out that this obviously isn’t true, unless it is measured in some unconventional way, or excludes tier one nations, or perhaps isn’t about rugby at all. “Well then you’ve just changed the rules!” came John Hattie’s emphatic reply. “Never argue stats with a statistician!”

Since that shared lunch earlier this month, I’ve looked for the proof. I’ve thrown into Google all of the search terms of which I can conceive, and I can’t find a single source to back up Hattie’s claim. With no small amount of irony, I’ve sifted through data, wilfully ignoring the overwhelming body of evidence pointing me in the other direction, trying to find just a single glimmer of quantitative data that I can distort to meet my needs. But I’ve come up empty-handed. Nothing. Nada. Maybe, as some of his critics would argue, he genuinely isn’t as good at stats as his disciples would have you believe…

But it has to be said that there is something about his latest work, a model that outlines the science of how we learn and how various learning strategies fit into it, that does seem to make a great deal of sense. As mindful as I am that this sounds like straightforward confirmation bias (‘I like it because it feels right’), if it feels like it makes sense and it fits nicely with a wider body of research and literature, then that makes it worth taking a closer look at…

The start of the day.

The story I heard was that some lucky/clever individual at Waldegrave School (home of the Richmond Teaching School Alliance) had somehow ‘won’ John Hattie and his Visible Learning gang for a day. Either way, by virtue of the growing relationship between the schools in neighbouring Kingston and Richmond boroughs, I was there to enjoy John and his team taking a day out of their world tour (no, really) to do their thing in Twickenham.

Though the enjoyment wasn’t immediate.

The morning started with an introduction from Deb Masters (@DebMasters1), one of John’s collaborators in the Visible Learning team, setting the scene for the day. The first activity was designed to make us describe a learning process as we grappled with a task. The takeaway message was supposed to be that even as educational professionals we often don’t have a particularly broad vocabulary for, or understanding of, the process of learning. (I actually thought my colleague and I had a better crack at it than we were given credit for, but I don’t suppose that really changes the key suggestion that seemed to be about the importance of metacognition and developing a language for learning).

The actual activities used in this warm-up were fairly abstract (think aptitude test / non-verbal reasoning meets back-of-a-newspaper puzzle). I guess it made a point, but as someone who is occasionally sceptical about extrapolating from really narrow, niche, decontextualised research settings to the real world of a particular classroom in a particular school with a particular group of students, it didn’t do a great deal to quell the slight unease that had been set churning during Deb’s opening comments, peppered as they were with glib references to ‘activating’ research for teachers and ‘activating’ learning in the classroom… hmmm.

And then an eye-opening look at what works and when.

What ensued, over the remainder of the day, was an unpacking, primarily by Hattie himself, of aspects of his most recent paper (coauthored with Gregory Donoghue, available here) and a comprehensive mapping of various learning strategies onto a handy model for learning. The ideas will be largely familiar to those who’ve read his previous publications, as will the methodology (meta-analyses and effect sizes), but the format certainly gave me a real moment of clarity, particularly in relation to the importance of thinking about when any particular strategy is likely to be effective… This is something that hasn’t been so explicitly addressed in previous iterations from the Visible Learning juggernaut.

The other thing that struck me was the clarity with which the work is presented. The INSET materials (at 70+ pages, it’s virtually a book on its own) felt significantly more accessible than any of the other three VL books I own (each of which is worth reading, but page-turners they ain’t). The article isn’t too shabby either (though it lacks the graphics!)

The backbone of the model goes something like this:


Those familiar with the Visible Learning work will no doubt recognise the idea of surface > deep > transfer (think SOLO Taxonomy), and will also presumably be aware of the context in which Hattie sits this sequence:

“It is critical to note that the claim is not that surface knowledge is necessarily bad and that deep knowledge is essentially good. Instead, the claim is that it is important to have the right balance: you need to have surface to have deep; and you need to have surface and deep knowledge and understanding in a context or set of domain knowledge. The process of learning is a journey from ideas to understanding to constructing and onwards. It is a journey of learning, unlearning and overlearning. When students can move from ideas to ideas and then relate and elaborate on them we have learning – and when they can regulate or monitor this journey then they are teachers of their own learning. Regulation, or metacognition, refers to knowledge about one’s own cognitive processes (knowledge) and the monitoring of these processes (skilfulness). It is the development of such skilfulness that is an aim of many learning tasks and developing them is a sense of self-regulation.”

(Hattie, 2009, p. 29)

However, the critical feature of this new body of work is that for each stage of learning, the VL team have identified the specific strategies likely to be most effective. It really does appear very handy. And it throws up some interesting conflict with what I perceive as being quite widely held beliefs about ‘what the research says’. The key message? When you take a body of research about a particular strategy ‘en masse’ it presents a very different picture (read ‘effect size’) to when you go through that same body of research and consider at what stage in the learning process the strategy was being used, and then judge its effectiveness at that stage.

  • Using highlighters? Actually quite effective in the acquisition stage for surface learning.
  • Spacing, interleaving, testing? Basically good just for consolidation of surface learning.
  • Elaborative interrogation? Metacognition? Wait for acquisition of deep learning before you wheel them out.
  • Problem-based learning? Inquiry learning? All that group-work stuff which gets a bad rep? Actually pretty powerful stuff if you wait until the consolidation of deep learning before you use it.

And this really comes back to trying to understand effect sizes…

On effect sizes.

Hattie started the day with what sounded vaguely like a defence of his use of effect sizes (more on that later) and then made sure his audience were clear that they aren’t to be treated simply as a tick-list of strategies to do. Rather, they should provide a context for thinking about our ‘mindframes’ as educators…
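
For anyone less familiar with the statistic at the centre of the debate, an effect size is just a standardised measure of the difference between two groups. Here is a minimal sketch in Python of the most commonly cited form, Cohen’s d; the test scores are invented purely for illustration, and this is emphatically not how the VL meta-analyses themselves are computed:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Illustrative effect size (Cohen's d): difference in group means
    divided by the pooled (sample) standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores for a class taught with and without some strategy
with_strategy = [72, 75, 78, 80, 85, 88]
without_strategy = [70, 71, 74, 76, 79, 82]
print(round(cohens_d(with_strategy, without_strategy), 2))  # prints 0.81
```

The point Hattie kept returning to is that a single pooled number like this says nothing about *when* in the learning process the strategy was used – which is exactly what the new model tries to unpick.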

And then he went on…


“All that you need in order to enhance learning is a pulse. Pretty much everything you do as a teacher, works. So don’t ask ‘what works?’, ask ‘what works best for which students and when in the learning process?'”

And, rather topically given recent headlines and the ensuing Twitter storm, on the importance of interpreting (rather than just swallowing) effect sizes…

“If some studies have said that homework doesn’t work, it doesn’t mean you get rid of it! It means you improve it!”

And on the importance of continually evaluating our impact on learning…

“Build a coalition around the blue zone [the most positive effect sizes], identify the impact through evidence, then scale it up and invite those in the yellow zone to join you. You have no right to just sit in the yellow zone!”

On this he was unequivocal and, whatever you make of the methodology, surely nobody can disagree with the sentiment that all staff have a moral imperative to continually improve and refine what they do. ‘Say what you will’ about the value of effect sizes…


Say what you will.

I tweeted a quote back in August, whilst reading Dylan Wiliam’s ‘Leadership for Teacher Learning’:

If you’re new to the debate around the use of effect sizes and meta-analyses of this sort, see here (for some defence) and here, here, here or here for some background to the criticism.

So, when you find yourself sitting opposite John Hattie to enjoy an INSET lunch (happy coincidence, rather than design), what are you to do?

First some small talk. As a school with 1-to-1 iPad deployment, I was keen to hear his thoughts on the role of tech in the classroom. We agreed that the focus should be on the learning methodology rather than the technology per se, following which he offered some insight into research he is currently involved in, looking at the potential of social media in supporting learning. He seemed particularly enthusiastic about the discovery in one particular study that students are apparently asking questions via social media – while in the classroom – that they aren’t asking directly (i.e. the good old fashioned way… with their mouths). It sounds intriguing and, although I’ve not seen the research, the thing I’m most curious about is why these particular students don’t feel they can ask their questions directly… sounds like it could be more about relationships and classroom climate than about technology…

And then I asked him outright… Tactfully, but outright. “You have talked today and written a lot about effect sizes and meta-analyses. On the other hand, your detractors argue that using effect sizes in this way is misleading. What is the average teacher in the middle of it all supposed to think?”

His response, while mopping up the last of his chicken curry with some soggy naan, was delivered with a face that expressed a certain fatigue about the whole thing. “Look, I know Dylan Wiliam has said a whole load of nasty stuff about it in a book…” (I didn’t tell him I’d previously tweeted a quote from said book) “…but it’s a tool. They aren’t perfect, but they are a tool. You don’t stop using a tool just because it isn’t perfect. Dylan Wiliam uses effect sizes himself!”

So, not particularly illuminating and a little touchy perhaps… but, thankfully, it didn’t throw him off his stride for the afternoon session.


Verbal feedback… from a distance

eLearning eXpress – 18/11/16

This morning’s eLearning eXpress, led by Adam Norton (Year Leader for Yr9 and DT teacher) was a great session to end the week on…

Following on from a great 15 Minute Forum about coded marking, this was another session spent discussing a strategy for providing feedback from a distance (i.e. outside the classroom) in a way that is time-efficient…


We’ve been using Showbie since the very beginning of our 1-to-1 iPad deployment, though it isn’t exclusively an iPad app – staff and students can (and do, for certain things) use the web platform rather than the iPad. It is absolutely something that could work for you even in the absence of tablet technology. It is one of several platforms for collecting student work and providing feedback, instigating learning dialogue, signposting students to resources and/or links, and connecting with them more generally. It is used widely across the school: students like it, staff like it and, when used in certain ways, it can be a significant time-saver.

We’ve talked (and I’ve blogged) about laborious approaches to providing written feedback to students, and we’re taking strides towards striking a sensible balance. As we’ve moved along this journey, Adam admitted that he has tried a few different approaches himself, all of which left him tired, bored and looking for another way!

And so the teachers in our DT team, like some others around school, have been experimenting with recording verbal feedback rather than writing it out.

Adam talked us through the steps in how you actually do it (see the slideshow for the walk-through)…


Amongst the hints and tips that Adam shared was the idea that after you’ve recorded your feedback, you can listen back to it before deciding whether to commit it to the student’s assignment. This is quite handy in itself in terms of being able to ‘moderate’ your own marking, for example in those situations where you’ve marked a few more pieces and decide you need to go back and adjust an earlier feedback comment in light of the other work you’ve seen.

It is also possible to record multiple notes, whether for separate pieces of work or because you missed something out (there is no need to re-record the whole note!). It also opens up the possibility of students recording their own responses and creating an actual dialogue, if that is necessary.


The benefits of being able to provide verbal feedback rather than having to write it are many. Key for me is that you are still able to provide a personal response to each individual student, but in a less laborious and time-consuming way (and, as Adam highlighted, not having a stack of books to deal with is also a psychological bonus!)

Being able to sit with the mark scheme and the work in front of you and simply talk it through, without flitting back and forth between a pen and a feedback sheet, also makes the whole process feel more fluid. Taking a conversational approach to feeding back also allows students to hear the tone, inflection and nuance of your voice – suddenly comments that would seem stark or harsh on paper can be delivered in a more gentle manner without being diluted.

There are some potential challenges that need to be considered, though none of them seem insurmountable. As Adam said,

“I can’t mark in front of the telly any more, but I can do it lying down on my bed!”

Let’s face it, nobody likes the sound of their own voice, and this can be an awkward part of listening back to the feedback – or, worse, having students listen back to your feedback… all of them… at the same time! That needs thinking about! But the sound of your own voice is something you’ll get over,

“I’ve come to terms with the fact that this is actually how I sound!”

You’ll need a quiet place to record your feedback (Adam has been known to hideout in the sound-proofed music practice rooms!), especially if you’re recording lengthy voice notes – you don’t want too much background noise, and nor do you want interruptions mid-flow!

At the end of the session, there was some discussion about whether voice notes are quite as helpful when it comes to identifying specific mistakes, in relation to SPaG for example. One suggestion was to separate out the identification of mistakes in the good old-fashioned way, and stick to the developmental feedback in the voice note. However, I rather think that the correction of SPaG can still be dealt with in a potentially powerful way, by simply stating in the voice note, for example, “there is a spelling error in the first paragraph – find it and fix it!”. Such an approach would fit nicely with the message we are trying to push that feedback should be more work for the student than for the teacher! Worth a try…

Once the students get past the novelty (which won’t take long), they seem to like it. Though I was reminded of the comment relayed to me by Ceris Owen, our Subject Leader for DT, made by one of her A Level students following an early foray into providing verbal feedback on Showbie: “Miss, thanks for the feedback. Can I just say though, it was a bit weird hearing your voice in my bedroom!”… LoL, as the kids might say…

A lesson from growing All Blacks…

I tend to steer clear of the hackneyed and often over-reaching use of sporting analogy in talking about leadership and the work we do in schools more generally. That’s not to say there aren’t some important parallels and some very useful lessons to draw from the world of sport, but I’m happy to let someone else do the job of writing another clichéd rehash of the importance of ‘marginal gains’ (impressive as Mr Brailsford is), or another trite treatise about the importance of the hours spent on the training field (much as I agree, generally, with the sentiment).

I felt a little weakening in my resolve during the Rugby World Cup in September of last year, when I read this BBC article on life inside the All Blacks. I even started a draft post… but I resisted the urge (or the moment was lost… whatever). That said, it really is a worthwhile read about the importance of initiating people into a high-performing team, maintaining a culture of high expectations, and striving for continual development (and if you want something more substantial but in a similar vein, look no further than the brilliant ‘Legacy’ by James Kerr).

And then, a couple of weeks ago, this video of Buck Anderson, New Zealand Rugby Union’s Youth Development Officer, appeared in my news feed…

If you enjoy it, you’ll stick at it

A colleague and good friend of mine, part of our sixth form team, takes every opportunity he can to share with staff, students and parents his mantra that if students enjoy school, they will succeed at school. If they look forward to coming in each day and throw themselves into the opportunities – and challenges – they are faced with, then they are more likely to get a return and are more likely to succeed in the long run.

Interesting then, to listen to Buck talking about the programme that provides one of the most successful sporting teams of all time with its young blood. The focus of the youth development programme seems not to be on encouraging within every young rugby player a burning aspiration to be an All Black. The focus is not even necessarily on winning. The focus, borne out of listening to what the young people themselves have said, is on engagement with the sport and enjoyment of the experience: get stuck in.

“They are not actually, when you talk to them, thinking they’re going to be All Blacks. You constantly hear this “it’s every kiwi kid’s dream of being an All Black”… but the dream of being an All Black is an adult’s view… The kids want to have fun.

“We try to create that environment and drill down further into what…they mean by ‘fun’, and what is ‘enjoyable’? So, meaningful competitions, really good skill development, coaches who are going to make them better. And above all not taking it all too seriously. If the kids have got that, they’re happy. If they’re happy, they’ll continue to play and enjoy it.”

Meaningful competition. Not just lots of easy victories, but meaningful challenge for all.

Really good skill development and coaches who are going to make them better. Expert input, expert modelling, expert feedback. A clear picture of what success will look like and the strategy and tools to get them there.

And don’t write off the lower attainers before they’ve had chance to bloom.

“give these kids good coaching, give them a fun environment, give them a good competition, and we’ll keep them in the game for longer”

How often do we try to get students to focus on the end goal (which may or may not even be a goal that they share with us)?

How often does “you need to work hard because you need the GCSE grade” actually work as a motivation? For some, yes. For all? No. The better starting place is surely to inspire a passion for learning and for the subject we are teaching, engaging students (not with gimmicks but with our own passion) in challenging learning experiences and meaningful successes, and nurturing a love of learning. After all, the jury seems to be pretty much ‘in’ on the idea that motivation results primarily from achievement; it doesn’t precede it.

Some lessons worth bearing in mind…

Photo Credit: KiwiMunted Flickr via Compfight cc

Coded marking…

#15MinForum – 4/11/16

Our latest 15 Minute Forum was led by Lucy McDonald on the topic of coded marking. This is an idea that Lucy first started exploring as a PGCE student and which she continued to develop last year when she joined us as an NQT. It is certainly a nicely refined approach to marking that sits well with the message we are trying to push around being as efficient as possible with written feedback (see some thoughts here on the topic of trying to ‘Mark less but mark better’).

At the heart of the approach that Lucy adopts when it comes to providing written feedback on assessments, tests, key pieces of book work etc, is the principle that any sort of feedback should require more effort on the part of the student than the teacher. This is an idea that a number of staff are exploring in a variety of ways as part of the work of our Learning Communities this year, and sits nicely in our drive to make sure students are being made to think hard as part of their learning.

Lucy’s full presentation can be seen below, but the steps are as follows:

  1. Rather than providing each student with individual written comments on their work, create a numbered list of feedback points (this could be done in advance or built up while looking at students’ work). Then, as you go through each student’s work, simply write down (in the margin or at the end of the work) whichever numbers relate to the appropriate feedback points for this student. Depending on the nature of the work being assessed, a student may be given several numbers, or just a couple.
  2. Students are then shown the full list of feedback points (on the projector or via Showbie) and given time to write out the feedback points that the teacher has indicated to be most relevant to them. By the end of this stage, every student has a number of detailed feedback points for their work, but written out by them rather than by the teacher. As well as being a significant time-saver for the teacher, there is also the added benefit that in writing the comments out themselves, students are forced to engage with them.
  3. The final stage – and for me the big development on similar models that I’ve seen elsewhere – is that each of the numbered feedback points also corresponds to a DIRT (Directed Improvement and Reflection Time) task. Again, students write these out… and then do them! This means that not only have the students been forced to write, read and consider the feedback points, but they are also then given an improvement task that relates directly to whatever their feedback was.
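
As a rough sketch of how the mechanics of the steps above fit together (the feedback bank and code numbers below are invented examples for illustration, not Lucy’s actual comments):

```python
# Hypothetical sketch of the coded-marking idea: each code number maps
# to a full feedback comment and a matching DIRT improvement task.
FEEDBACK_BANK = {
    1: ("Conclusion does not refer back to the data.",
        "Rewrite your conclusion, quoting at least two figures from your results."),
    2: ("Key terms are used without definitions.",
        "Define the three key terms you used and add the definitions in the margin."),
    3: ("Method section lacks detail.",
        "List the steps of your method so another student could repeat it."),
}

def feedback_for(codes):
    """Expand the code numbers written on a piece of work into the
    comments and DIRT tasks the student should copy out and complete."""
    return [FEEDBACK_BANK[c] for c in codes]

# A student whose work was marked '1, 3' writes out these two comments
# and then completes the two matching DIRT tasks.
for comment, task in feedback_for([1, 3]):
    print(f"Feedback: {comment}\n  DIRT task: {task}")
```

The same structure would work just as well on paper or in a spreadsheet – the point is simply that one comment bank serves the whole class, and each code carries its DIRT task with it.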


It might not work for every task on which a teacher is wanting to provide feedback after the task has been done (as opposed to feedback in the classroom to shape the work as it is being done), but there is a lot of scope here for increasing the return on our investment of time.

Other examples of coded marking would include providing literacy codes or letters which highlight where something is missing or underdeveloped. As long as the students know what the codes mean and how the system works, it would seem that any sort of coding could be very valuable! I think there is also scope to link this sort of approach to one where feedback is given, coded or otherwise, but the student then has the job of locating the section of the work to which the feedback refers. For example, on simple recall questions, why not tell a student they have got X number correct and Y number wrong, and then let them try and identify which ones were right and which were wrong…

For more suggestions of ways to reduce the time spent by teachers providing lengthy written feedback, this article from @joe_kirby has some thought-provoking suggestions!


See Lucy’s full presentation here…
