Mixing it up…

#15MinForum – 23/02/17

Each time our Learning Communities meet (once per half-term, when we have a late start for students in order to buy some time for CPDL), I have the privilege of wandering from group to group under the pretence of ‘offering support’ (when in reality most of what I do is marvel at the richness and depth of discussion our staff are engaging in!).

A few weeks ago I walked in on a conversation taking place in one of the groups that have had the brilliant ‘Make it Stick’ as their core reading, to hear our Director of Music, Jonny Bridges, explaining the analogy he uses to illustrate for his students the interleaving approach he is using with them… Fittingly for a music tech teacher, it involves a mixing desk…

Photo Credit: K. McMahon Flickr via Compfight cc

As I sat listening, gripped by his analogy, I scribbled myself a reminder: ‘Jonny. 15 Minute Forum. Interleaving.’


Mixing it up

As you can see in Jonny’s prezi (here), he started by setting out how interleaving contrasts with the way most of our subjects work through their schemes of learning. Specifically, we tend to teach in successive units of work that are often then not returned to until the end of the year. When following an interleaved approach, the curriculum is woven in on itself, so that rather than drilling down into one unit at a time before moving on to the next (i.e. AAAA, BBBB, CCCC), the teacher instead moves between topics, introducing things one layer at a time (i.e. ABC, BAC, ACB, CBA). This sits nicely alongside – and is indeed inextricably linked to – the idea of spacing vs massing practice (there have been some nice ideas shared in relation to this at previous 15 Minute Forums/ eLearning eXpress meetings, like this one from Gabby Veglio and this one from Annis Araim).
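For the programmatically minded, the contrast between the two sequences can be sketched in a few lines of Python. This is a toy illustration only – the function names and topic labels are mine, not Jonny’s – but it makes the blocked-vs-interleaved pattern concrete:

```python
import random

def blocked(topics, passes):
    """Massed practice: drill each topic to completion before moving on (AAAA, BBBB, CCCC)."""
    return [t for t in topics for _ in range(passes)]

def interleaved(topics, passes, seed=0):
    """Interleaved practice: visit every topic once per pass, in a shuffled order (ABC, BAC, ...)."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(passes):
        layer = list(topics)
        rng.shuffle(layer)  # reshuffle each pass so adjacent passes differ
        schedule.extend(layer)
    return schedule

print("".join(blocked("ABC", 3)))      # AAABBBCCC
print("".join(interleaved("ABC", 3)))  # one of each topic per pass, order shuffled
```

Both schedules contain exactly the same amount of practice on each topic; only the ordering differs – which is precisely the point of the research.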

Jonny highlighted that students (and teachers!) may feel frustrated that they don’t work on a single topic for extended periods of time (as they would in a ‘massed practice’ approach) and therefore don’t develop short-term fluency (which is, in truth, illusory!); instead, their sense of mastery develops over a longer period of time as the topic is returned to repeatedly.

And this is where the mixing desk analogy comes in…


This distinction between short-term artefacts from the classroom (‘performance’) and longer-term, deeper conceptual growth (which we might more justifiably call ‘learning’) is an important one, especially as the latter is what we want and yet a lot of what we are geared up for is the former…


Learning vs Performance

This article, from Bjork and colleagues, is a must-read if you’re interested in exploring the learning vs performance distinction further, as is this blog from David Didau (@learningspy), which situates interleaving within the wider point about desirable difficulties…

Elsewhere, this guest post on the fantastic ‘Learning Scientists’ website is worth a few minutes of your time…


Read more about Jonny’s own experiences and successes in his prezi, here



Testing is your friend…

#15MinForum – 12/01/17

The latest 15 Minute Forum was the first one to be based directly on the work being done in our Learning Communities. Drawing on some of the research presented in the excellent ‘Make it Stick’ (the core reading for three separate Learning Communities), Gabby Veglio (@MissVeglio, one of our Year Leaders and numeracy coordinator) led a session about the benefits of frequent, low-stakes testing…

Quiz Quiz Quiz

So what’s the gist? Gabby started with a bit of the background theory to the idea of frequently testing the students through low-stakes quizzing (or getting them in the habit of doing it themselves).


All good so far. But for many of us, perhaps the most striking suggestions come when thinking about how to get the most out of an approach which acknowledges the value of testing (though the first thing you might want to do is call it ‘quizzing’ or ‘retrieval practice’, or at least educate staff and students about the fact that ‘testing’ doesn’t have to mean high-stakes tests in the form of exams or summative assessments!)


This starts to touch on the ideas relating to the importance of leaving time to forget things and recognising that forgetting (and having to think hard to retrieve) is a good thing when it comes to learning! With that in mind, we are then into the realms of thinking about spacing and interleaving…

Gabby then shared some of the examples that she uses in her own practice to try and implement the principles set out before…

Hungry for more?

Go here to read more about applying cognitive psychology to enhance education practice – it really is a one-stop shop in terms of thinking about retrieval practice and all the associated considerations.

This article from Soderstrom and Bjork is an interesting (though relatively heavy) read for anyone interested in looking at how findings in cognitive science can be applied to the classroom, particularly on the point of the difference between learning and performance.

As always, there has already been a great 15 Minute Forum by the guys over at ClassTeaching on the topics of spacing and interleaving.

The slideshow of Gabby’s full presentation can be seen below.


The day I had a curry with John Hattie…

…and a rundown of some of his latest work.

It turns out, according to the professor born in the South Island of New Zealand, that the All Blacks are in fact not the most successful rugby team in history. No, I was reassured unequivocally that this honour lies with that renowned rugby-playing nation… Malta. One of our other lunch partners, another kiwi, pointed out that this obviously isn’t true, unless it is measured in some unconventional way, or excludes tier one nations, or perhaps isn’t about rugby at all. “Well then you’ve just changed the rules!” came John Hattie’s emphatic reply. “Never argue stats with a statistician!”

Since that shared lunch earlier this month, I’ve looked for the proof. I’ve thrown into Google all of the search terms of which I can conceive, and I can’t find a single source to back Hattie’s claim up. With no small amount of irony, I’ve sifted through data, wilfully ignoring the overwhelming body of evidence pointing me in the other direction, trying to find just a single glimmer of quantitative data that I can distort to meet my needs. But I’ve come up empty-handed. Nothing. Nada. Maybe, as some of his critics would argue, he genuinely isn’t as good at stats as his disciples would have you believe…

But it has to be said that there is something about his latest work, a model that outlines the science of how we learn and how various learning strategies fit into it, that does seem to make a great deal of sense. As mindful as I am that this sounds like straightforward confirmation bias (‘I like it because it feels right’), if it feels like it makes sense and it fits nicely with a wider body of research and literature, then that makes it worth taking a closer look at…

The start of the day.

The story I heard was that some lucky/clever individual at Waldegrave School (home of the Richmond Teaching School Alliance) had somehow ‘won’ John Hattie and his Visible Learning gang for a day. Either way, by virtue of the growing relationship between the schools in neighbouring Kingston and Richmond boroughs, I was there to enjoy John and his team taking a day out of their world tour (no, really) to do their thing in Twickenham.

Though the enjoyment wasn’t immediate.

The morning started with an introduction from Deb Masters (@DebMasters1), one of John’s collaborators in the Visible Learning team, setting the scene for the day. The first activity was designed to make us describe a learning process as we grappled with a task. The takeaway message was supposed to be that even as educational professionals we often don’t have a particularly broad vocabulary for, or understanding of, the process of learning. (I actually thought my colleague and I had a better crack at it than we were given credit for, but I don’t suppose that really changes the key suggestion that seemed to be about the importance of metacognition and developing a language for learning).

The actual activities used in this warm-up were fairly abstract (think aptitude test/ non-verbal reasoning meets back-of-a-newspaper puzzle). I guess it made a point, but as someone who is occasionally sceptical about extrapolating from really narrow, niche, decontextualised research settings to the real world of a particular classroom in a particular school with a particular group of students, it didn’t do a great deal to quell the slight unease that had been set churning during Deb’s opening comments, peppered as they were with glib references to ‘activating’ research for teachers and ‘activating’ learning in the classroom… hmmm.

And then an eye-opening look at what works and when.

What ensued, over the remainder of the day, was an unpacking, primarily by Hattie himself, of aspects of his most recent paper (coauthored with Gregory Donoghue, available here) and a comprehensive mapping of various learning strategies onto a handy model for learning. The ideas will be largely familiar to those who’ve read his previous publications, as will the methodology (meta-analyses and effect sizes), but the format certainly gave me a real moment of clarity, particularly in relation to the importance of thinking about when any particular strategy is likely to be effective… This is something that hasn’t been so explicitly addressed in previous iterations from the Visible Learning juggernaut.

The other thing that struck me was the clarity with which the work is presented. The INSET materials (at 70+ pages, it’s virtually a book on its own) felt significantly more accessible than any of the other three VL books I own (each of which is worth reading, but page-turners they ain’t). The article isn’t too shabby either (though it lacks the graphics!)

The backbone of the model goes something like this:


Those familiar with the Visible Learning work will no doubt recognise the idea of surface > deep > transfer (think SOLO Taxonomy), and will also presumably be aware of the context in which Hattie sits this sequence:

“It is critical to note that the claim is not that surface knowledge is necessarily bad and that deep knowledge is essentially good. Instead, the claim is that it is important to have the right balance: you need to have surface to have deep; and you need to have surface and deep knowledge and understanding in a context or set of domain knowledge. The process of learning is a journey from ideas to understanding to constructing and onwards. It is a journey of learning, unlearning and overlearning. When students can move from ideas to ideas and then relate and elaborate on them we have learning – and when they can regulate or monitor this journey then they are teachers of their own learning. Regulation, or metacognition, refers to knowledge about one’s own cognitive processes (knowledge) and the monitoring of these processes (skilfulness). It is the development of such skilfulness that is an aim of many learning tasks and developing them is a sense of self-regulation.”

(Hattie, 2009, p. 29)

However, the critical feature of this new body of work is that for each stage of learning, the VL team have identified the specific strategies likely to be most effective. It really does appear very handy. And it throws up some interesting conflict with what I perceive as being quite widely held beliefs about ‘what the research says’. The key message? When you take a body of research about a particular strategy ‘en masse’ it presents a very different picture (read ‘effect size’) to when you go through that same body of research and consider at what stage in the learning process the strategy was being used, and then judge its effectiveness at that stage.

  • Using highlighters? Actually quite effective in the acquisition stage for surface learning.
  • Spacing, interleaving, testing? Basically good just for consolidation of surface learning.
  • Elaborative interrogation? Metacognition? Wait for acquisition of deep learning before you wheel them out.
  • Problem-based learning? Inquiry learning? All that group-work stuff which gets a bad rep? Actually pretty powerful stuff if you wait until the consolidation of deep learning before you use it.

And this really comes back to trying to understand effect sizes…

On effect sizes.

Hattie started the day with what sounded vaguely like a defence of his use of effect sizes (more on that later) and then made sure his audience were clear that they aren’t to be treated simply as a tick-list of strategies to do. Rather, they should provide a context for thinking about our ‘mindframes’ as educators…

And then he went on…


“All that you need in order to enhance learning is a pulse. Pretty much everything you do as a teacher, works. So don’t ask ‘what works?’, ask ‘what works best for which students and when in the learning process?'”

And, rather topically given recent headlines and the ensuing twitter storm, on the importance of interpreting (rather than just swallowing) effect sizes…

“If some studies have said that homework doesn’t work, it doesn’t mean you get rid of it! It means you improve it!”

And on the importance of continually evaluating our impact on learning…

“Build a coalition around the blue zone [the most positive effect sizes], identify the impact through evidence, then scale it up and invite those in the yellow zone to join you. You have no right to just sit in the yellow zone!”

On this he was unequivocal and, say what you will about the value of effect sizes, surely nobody can disagree with the sentiment that all staff have a moral imperative to continually improve and refine what they do. ‘Say what you will’ about the value of effect sizes…


Say what you will.

I tweeted a quote back in August, whilst reading Dylan Wiliam’s ‘Leadership for Teacher Learning’.

If you’re new to the debate around the use of effect sizes and meta-analyses of this sort, see here (for some defence) and here, here, here or here for some background to the criticism.
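For anyone meeting the term cold: an effect size in this context is a standardised mean difference (Cohen’s d) – the gap between two group means divided by their pooled standard deviation – which is what lets meta-analyses compare studies that used different tests and scales. A minimal sketch of the standard calculation (the scores below are made-up numbers, purely for illustration):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardised mean difference: (mean_a - mean_b) / pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = stdev(group_a) ** 2, stdev(group_b) ** 2
    # Pooled SD weights each group's variance by its degrees of freedom
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Made-up test scores for an intervention class vs a comparison class:
d = cohens_d([65, 70, 72, 68, 75], [60, 62, 66, 64, 63])
print(round(d, 2))  # 2.24
```

Hattie’s much-debated ‘hinge point’ of d = 0.40 comes from averaging thousands of such numbers across studies – which is precisely what his critics argue glosses over the differences between the studies being averaged.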

So, when you find yourself sitting opposite John Hattie to enjoy an INSET lunch (happy coincidence, rather than design), what is one to do?

First some small talk. As a school with 1-to-1 iPad deployment, I was keen to hear his thoughts on the role of tech in the classroom. We agreed that the focus should be on the learning methodology rather than the technology per se, following which he offered some insight into research he is currently involved in, looking at the potential of social media in supporting learning. He seemed particularly enthusiastic about the discovery in one particular study that students are apparently asking questions via social media – while in the classroom – that they aren’t asking directly (i.e. the good old fashioned way… with their mouths). It sounds intriguing and, although I’ve not seen the research, the thing I’m most curious about is why these particular students don’t feel they can ask their questions directly… sounds like it could be more about relationships and classroom climate than about technology…

And then I asked him outright… Tactfully, but outright. “You have talked today and written a lot about effect sizes and meta-analyses. On the other hand, your detractors argue that using effect sizes in this way is misleading. What is the average teacher in the middle of it all supposed to think?”

His response, while mopping up the last of his chicken curry with some soggy naan, was delivered with a face that expressed a certain fatigue about the whole thing. “Look, I know Dylan Wiliam has said a whole load of nasty stuff about it in a book…” (I didn’t tell him I’d previously tweeted a quote from said book) “…but it’s a tool. They aren’t perfect, but they are a tool. You don’t stop using a tool just because it isn’t perfect. Dylan Wiliam uses effect sizes himself!”

So, not particularly illuminating and a little touchy perhaps… but, thankfully, it didn’t throw him off his stride for the afternoon session.


What doesn’t work?

#15MinForum – 07/10/16

At this week’s 15 Minute Forum we took a look at the Sutton Trust’s report, ‘What Makes Great Teaching?‘. The report itself is a couple of years old now, but well worth a read if you missed it first time around…


The exciting work that our Learning Communities are doing revolves around developing expertise through cycles of inquiry. In doing so, our teachers are drawing on stimulus material across a range of areas to identify practices regarded as particularly effective, then looking at how to adapt and refine these ideas in the context of their own classrooms. Many of the ideas about what works that feature in this report are therefore ideas that are being explored across the school already.

Our message to staff this year about investing in their own professional learning is underpinned by Dylan Wiliam’s mantra, ‘sometimes we have to stop doing good things in order to do even better things’. With this in mind, we used the 15 Minute Forum to take a look at the things that are highlighted as being ineffective – strategies or approaches for which the evidence base is weak, or which are simply regarded as being less efficient or effective than other alternatives.

What are the areas identified as ‘ineffective’?

  • Using praise lavishly.
  • Allowing learners to discover key ideas for themselves.
  • Grouping learners by ability (both in terms of allocating students to different teaching groups and in terms of within-class grouping).
  • Encouraging re-reading and highlighting to memorise key ideas.
  • Trying to address issues of confidence and low aspirations before trying to teach content.
  • Presenting information to learners in their preferred learning style.
  • Ensuring learners are always active, rather than listening passively, if you want them to remember.

Read the report for more! (the ineffective practices start on p22, but the whole report is well worth 10 minutes of your time)

For more research into what works, this post from Professor Rob Coe is a great starting point…


Planning for growth…

Earlier this month I shared a quote with staff, hinting at the direction we are heading with our professional development and learning programme. In their excellent book, Professional Capital (2012), Andy Hargreaves (@HargreavesBC) and Michael Fullan (@MichaelFullan1) state

What is needed is a profession that constantly and collectively builds its knowledge base and corresponding expertise, where practices and their impact are transparently tested, developed, circulated and adapted. There needs to be a continuous amalgamation of precision and innovation, as well as inquiry, improvisation and experimentation.


Over the last few months I’ve tried to synthesise their work and that of many others (which I’ve blogged about recently), while also reflecting on the successes and challenges we’ve faced in trying to engage staff with learning projects, which culminated in this week’s ‘Celebration of Inquiry’. Today I shared with staff the fruits of that lengthy process with the launch of the themes for next year’s Learning Communities… and I’m very excited…

The work of the Learning Communities will form the backbone of our professional development and learning programme next year. We’ve bought staff some time by committing to the introduction of a fixed professional learning slot where once a half-term, students will have a late start to facilitate the meeting of the Learning Communities (we toyed with making it a fixed weekly arrangement to open up a whole host of other development opportunities, but decided to take one step at a time).

Staff have now been asked to read the blurb for each of the 10 themes and identify which Learning Community they would most like to become part of for the year. Although there is some overlap between some of the themes, the overall scope is intentionally broad in order not only to cater to a wide range of personal development and learning interests, but also to ensure that the work of the Learning Communities supports a range of strategic development priorities (i.e. priorities based on what we see as trends from observations etc, as well as priorities relating to the implementation of new courses with increasing challenge and linear assessment).

Within each Learning Community, staff will be supported with identifying and articulating specific inquiry questions to explore. It is likely that a range of different inquiry questions will be explored within each community, but these questions will nest within the group’s theme, allowing for individual direction but maintaining internal alignment of the community.

The focus of each Learning Community is underpinned by a piece of ‘essential reading’. For some of the communities, this consists of carefully selected articles and online resources (blogs, videos etc), while for other groups it is a carefully selected book. Once staff have indicated which community they would like to join, we buy the books! This key reading will be a stimulus for the professional learning of the group: books (or articles and web links where this constitutes the reading material) will be distributed before the start of our nice long summer for staff to read before the first meeting mid-September! Individuals and groups will then derive specific inquiry questions to develop their own practice based on the learning stimulated by their reading and the discussion that ensues.

Each meeting of the Learning Communities will provide opportunity for each member to share their progress, engage in some new learning/ reflection on the reading or other stimulus, and plan for next steps. These meetings will be supported through the use of twilight INSET opportunities to develop all staff as coaches and as observers, as well as supporting staff with the development of research skills (e.g. framing inquiry questions and measuring impact through soft or hard data, or using peer observation (which will run to the agenda of the observed teacher) or student observation).

Over the course of the year, the intention is to draw upon the developing expertise within these communities and use it to support staff beyond those communities as well. The work will end in a ‘Celebration of Inquiry’ event – it’ll be like the one we had earlier this week, but on a larger scale… I envisage all staff will be involved in sharing their learning from across the year…

I for one can’t wait!

CPD, Learning Communities and research…

This year we dabbled with having all staff working in groups on Professional Learning Projects – we’re gearing up to celebrate the impact that these have had at our INSET day later this term. The idea, looking ahead, is to move towards a staff development model that brings us closer to long-term, collaborative teacher learning groups: giving staff time and space to work together using an approach that is rooted in enquiry and reflection, informed by research and reading (taking us away from having ‘led’ sessions as the backbone, where an ‘expert’ tells everyone lots of good ideas)…

As part of the review and planning, I’ve invested considerable time in reading and researching what other leading schools are doing, and looking at how this nests within the research and evidence base. As part of that process, I thought I would assemble some of the high-quality literature that has been invaluable for me over the last few months that is informing the exciting plans for 2016-17 (and beyond) to serve as a platform for others…

Photo Credit: First Joker via Compfight cc

A more detailed overview of our model will follow once we’ve pinned down the details (and done some magpie-ing from other schools leading the way!), but here is a sample from a much bigger body of reading that is informing our plans for professional learning…


The (general) research on Professional Development.

The Teacher Development Trust’s (@TeacherDevTrust) Developing Great Teaching is a great starting point for looking at what the research suggests works and what doesn’t.

The Centre for the Use of Research Evidence in Education (CUREE, @Curee_official) have produced an equally accessible introduction to the research around teacher development in their report, Understanding What Enables High Quality Professional Learning (I particularly like the distinction in thinking about ‘professional development’ and ‘professional learning’). Equally, The Sutton Trust’s (@suttontrust) report on Developing Teachers contains some useful suggestions and insight to get the cogs turning.

I can’t pretend to have read the whole thing, but I keep telling myself that at some point I will work through the full text of Helen Timperley’s ENORMOUS best evidence synthesis on Teacher Professional Learning and Development. However, this summary of Timperley’s work by Mike Bell over at the Evidence Based Teachers Network is an easy starting point (and it is one of the pieces of work reviewed by the TDT and Curee).

Fraser et al’s (2007) review of Teachers’ continuing professional development has some interesting observations about the relationship between formal/informal opportunities, collaborative endeavour, and a sense of ownership. Their conclusions suggest that:

approaches which are based on collaborative enquiry and that support teachers in reconstructing their own knowledge are most likely to lead to transformative…

Which brings us to…


Learning Communities.

Photo Credit: infigicdigital via Compfight cc

The work of Dylan Wiliam (@DylanWiliam), a leading authority on both formative assessment and the model of staff working collaboratively in enquiry groups that he calls ‘Teacher Learning Communities’, has provided much of the stimulus for the actual nuts and bolts of our programme for next year. This white paper on Sustaining Formative Assessment with Teacher Learning Communities is a must-read, while this webinar on Five Components of an Effective Teacher Learning Community provides similar ideas in a different format.

Another of the more practical reads comes from the work done in developing the NCSL’s Research and Development Kitbag work. The secondary phase case studies are well worth a read… Likewise, reading the NCSL’s Leading a Research Engaged School has proved helpful, particularly in relation to thinking about where we might look outside of our own school for research expertise (I’ve not read this lot yet, but may do…)


Although the actual model that we are pursuing leans heavily on Wiliam’s work, the intellectual exercise of looking at the background research is, in my opinion, a worthwhile pursuit in itself. A couple of meaty examples come from work presented by Ray Bolam and colleagues:

…a group of people sharing and critically interrogating their practice in an ongoing, reflective, collaborative, inclusive, learning-oriented, growth-promoting way (Toole and Lewis, 2002); operating as a collective enterprise (King and Newmann, 2001). Summarising the literature, Hord (1997, p1) blended process and anticipated outcomes in defining a ‘professional community of learners’ (Astuto et al, 1993) as one “…in which the teachers in a school and its administrators continuously seek and share learning, and act on their learning. The goal of their actions is to enhance their effectiveness as professionals for the students’ benefit; thus, this arrangement…”

The key characteristics of such a community seem to boil down to:

  • shared values and vision
  • collective responsibility
  • reflective professional enquiry
  • collaboration
  • group, as well as individual, learning


More on collaborative professional learning.

Read an introduction to the idea of moving from CPD to JPD (Joint Practice Development) in this National College resource on Powerful Professional Learning: a school leader’s guide to joint practice development. This paper, from Aileen Kennedy at the University of Strathclyde, also explores perceptions of the idea of collaborative CPD and potential barriers, including a review of pertinent literature.

‘mark less, but mark better’

The EEF’s review into marking, published last month, has come at an opportune time as we continue to embed and refine our approach to written formative feedback.

Is it groundbreaking? No. Is it worth a read anyway? Yes.

Photo Credit: *janine* via Compfight cc

We all know the score with regard to teacher workload – this report from the TUC in February of this year provides some interesting contextualising figures:

The most unpaid overtime is done by teachers and education professionals (with more than half of them working an average of 11.9 hours unpaid every week)

… and I’d wager that there are more than a few teachers who occasionally – or even routinely – do more than this average! While such figures will inevitably continue to colour the perception of many outside the profession, making a challenging climate for recruitment even more so, our focus at the moment is on doing what we can to support those staff that are already in our school to help them find a manageable balance.

According to the Government Response to the Workload Challenge, published in February last year, 53% of those who participated in the survey identified marking as one of the areas that represents opportunity to reduce workload (only ‘recording, inputting, monitoring and analysing data’ featured more often in responses, at 56%). At around the same time as this report was published, we started the process of rethinking our assessment policy…


Borrowing from a phrase that I’d heard Christine Harrison use at a conference where she spoke about her work on AfL, one of the guiding principles for a central policy that we knew needed to work across the whole school in a range of contexts was the idea that we wanted consistency of principle rather than needing uniformity of practice. To this, we added the mantra (in relation to written assessment) that it should be done at the right time, for the right reasons (that is ‘to support the progress of students‘ rather than ‘to prove to an observer/ inspector/ line manager that I do it’!), and off we set…

Much of the final document focussed on the written feedback (i.e. ‘marking’) side of things; the classroom-based side of assessment and feedback (i.e. the ‘short cycle’ formative assessment I referred to in this post) is picked up elsewhere through our focus on classroom practice in the learning and teaching programme. It sets out minimum expectations and core principles (whilst avoiding being unnecessarily directive or prescriptive) in terms of frequency of formative feedback and the importance of students being given time to reflect and respond to feedback (DIRT) etc. It also prompted us to make a few potentially risky decisions (for good reasons!), for example removing half-termly data drops, opting instead for a ‘live’ system. This allows subject areas and class teachers to add interim assessment data as and when summative assessments are completed, so that schemes of learning can be planned and scheduled in a way that makes sense for the learning and development of ideas, rather than scheduled just so that the assessment data from a unit can be included in an arbitrary data drop each half-term.


Photo Credit: MartinShapiro via Compfight cc

Is it all working perfectly? Not yet. However, we are convinced that the principles are the right ones, and we are taking every opportunity to remind staff that we want them marking at the right times and for the right reasons: we care about staff wellbeing and we care about the learning experience of our students. If we want our staff to work sensible hours, then we recognise that there may be occasions where compromises have to be made: we want staff to make judicious, high-quality use of the red pen in a way that maximises impact at key times, and to ensure that time is prioritised for the planning of great learning experiences across lessons and units.

In this post on sharing effective practice in relation to marking and using it to refine practice across the school (emphasising a long-term developmental focus rather than short-term monitoring of compliance), Stephen Tierney (@leadinglearner) shares some interesting ideas on moving a policy from words on a page to tangible changes in practice. There are certainly some things for us to attend to in this respect over the remainder of the academic year: creating opportunities for staff to see the detail of what is happening around the school and what seems to be working, not only from the point of view of workload, but also in terms of what effective written feedback actually looks like. If we're looking at written feedback from the point of view of economy and efficiency, we have to look carefully at what has the most impact. We also have work to do around ensuring it has impact for the students: impact that they can reflect on in a meaningful way, and that lets them articulate (for their own benefit, but also to others) the specific links between the feedback they are being given and the progress they are making.

Clearly, this has to be done alongside ongoing reflection on what the research is telling us: both the findings being gleaned as part of the work of one of our Professional Learning Project groups, and the larger-scale and more robust findings of the EEF's long-awaited review into marking, published last month. Although one of the key messages from the document is that the evidence on the impact of marking is actually fairly scant, don't let that put you off reading it. Suggestions do emerge from the research that exists, and many of them concern the fine details of how we mark, rather than recommendations that would affect the broad strokes with which we have set out the principles of our assessment policy.


“Does our marking approach require our pupils to work to remember or reach the correct answer?” (p. 12)

This is the question that I think struck me most in the whole report. It isn’t necessarily the most significant point, but given our recent reflection on the way we are using summative assessments formatively and the conclusions we’ve reached about the need for students to think hard for themselves, this seems like a potentially useful rule of thumb in terms of considering whether our marking is focussed on surface-level corrections or development of deeper understanding.

The contention presented by the EEF review seems to be that mistakes (something a student can do but has not on this occasion) should be marked as incorrect but left for students to correct themselves, while errors (resulting from misunderstanding or having not yet mastered something) should perhaps be dealt with by providing hints or questions to lead the students to developing a more complete and accurate understanding of the topic at hand.

Although the distinction between errors and mistakes isn’t made in the same way in this transcript from a talk given by Dylan Wiliam, he does place a similar emphasis on the idea that it should be a standard expectation that students are expected to do the thinking:

We suggested that instead of telling students that they got 15 out of 20, the [maths] teacher could, instead, tell them that five of their answers were wrong, and that they should find them and fix them. The important feature of this feedback, like comment-only marking, is that it engages students, leaving them with something to do. This technique was subsequently adopted by English teachers when they provided feedback on students’ final drafts of writing assignments. Rather than correcting spelling, punctuation and grammar, the teachers put a letter in the margin for each error in that line: a G for an error in grammar, an S for a spelling mistake, a P for a punctuation error, and so on. For the stronger students, the teacher would simply put a dot rather than S/P/G in the margin for each error, and for the weaker students, the teacher might indicate where in the line the error was. The idea is that the feedback gives the learner something to do, so that the immediate reaction of the learner is that they have to think.

Elsewhere, in this fairly weighty review of research on formative feedback from Professor Valerie Shute, though not specifically about written formative feedback, there are some interesting comments about the idea of ‘directive feedback’ (providing corrective information) as opposed to ‘facilitative feedback’ (providing guidance and cues):

“Conventional wisdom suggests that facilitative feedback…would enhance learning more than directive feedback…yet this is not necessarily the case. In fact, some research has shown that directive feedback may actually be more helpful than facilitative—particularly for learners who are just learning a topic or content area (e.g., Knoblauch & Brannon, 1981; Moreno, 2004). Because scaffolding relates to the explicit support of learners during the learning process, scaffolded feedback in an educational setting may include models, cues, prompts, hints, partial solutions, as well as direct instruction (Hartman, 2002). Scaffolding is gradually removed as students gain their cognitive footing, thus directive feedback may be most helpful during the early stages of learning. Facilitative feedback may be more helpful later on, and the question is when. According to Vygotsky (1987), external scaffolds can be removed when the learner develops more sophisticated cognitive systems, where the system of knowledge itself becomes part of the scaffold for new learning.”

So, like everything else to do with learning and teaching, it isn’t straightforward (and this is before we’ve even got into John Hattie and Helen Timperley’s work on the power of feedback and the importance of considering the ‘level’ at which feedback is directed: ‘task’, ‘process’, ‘self-regulation’ or just ‘self’…)

It is unlikely to be easy to prescribe for teachers, in simple black and white, exactly what they should do and when… nor should we need to, if we have faith in their professional judgement and intuition (based on a detailed understanding of each individual) about what sort of feedback is most appropriate at any point in time for any single student. Perhaps the emphasis should be on making sure that our staff are proficient in working with a range of strategies and appreciate the rationale behind which strategies can work and when, and then trusting them to make the right decisions.

Photo Credit: nicolegalpern1 via Compfight cc


The EEF review makes various other suggestions based on the evidence that is already available: some of it reassuring in terms of the direction we are heading, some of it giving pause for thought and highlighting areas to which we would do well to give more thought. But then, if anyone in education reads a report like this and concludes that they’ve got it all nailed already, I suspect either they haven’t read the report properly, they haven’t really understood it, or they don’t have a true appreciation of what is going on in their own setting…

Is anything in the report groundbreaking? No, but it offers some tangible suggestions around which we could do some developmental work with staff to ensure we get the most impact for the most reasonable amount of input. I’m looking forward to reading whatever comes next…



As we continue to look outside our own school to learn from others, I’m also rather intrigued by the idea of ‘marking the Michaela way’, a minimal (to say the least) whole-school approach to marking that could have a lot going for it… A thought-provoking read!

A few more interesting reads…