CPD, Learning Communities and research…

This year we dabbled with having all staff working in groups on Professional Learning Projects – we’re gearing up to celebrate the impact that these have had at our INSET day later this term. The idea, looking ahead, is to move towards a staff development model that brings us closer to long-term, collaborative teacher learning groups: giving staff time and space to work together using an approach that is rooted in enquiry and reflection, informed by research and reading (taking us away from having ‘led’ sessions as the backbone, where an ‘expert’ tells everyone lots of good ideas)…

As part of the review and planning, I’ve invested considerable time in reading and researching what other leading schools are doing, and looking at how this nests within the research and evidence base. As part of that process, I thought I would assemble some of the high-quality literature that has been invaluable for me over the last few months that is informing the exciting plans for 2016-17 (and beyond) to serve as a platform for others…


A more detailed overview of our model will follow once we’ve pinned down the details (and done some magpie-ing from other schools leading the way!), but here is a sample from a much bigger body of reading that is informing our plans for professional learning…


The (general) research on Professional Development.

The Teacher Development Trust’s (@TeacherDevTrust) Developing Great Teaching is a great starting point for looking at what the research suggests works and what doesn’t.

The Centre for the Use of Research Evidence in Education (CUREE, @Curee_official) have produced an equally accessible introduction to the research around teacher development in their report, Understanding What Enables High Quality Professional Learning (I particularly like the distinction in thinking about ‘professional development’ and ‘professional learning’). Equally, The Sutton Trust’s (@suttontrust) report on Developing Teachers contains some useful suggestions and insight to get the cogs turning.

I can’t pretend to have read the whole thing, but I keep telling myself that at some point I will work through the full text of Helen Timperley’s ENORMOUS best evidence synthesis on Teacher Professional Learning and Development. However, this summary of Timperley’s work by Mike Bell over at the Evidence Based Teachers Network is an easy starting point (and it is one of the pieces of work reviewed by the TDT and CUREE).

Fraser et al’s (2007) review of Teachers’ continuing professional development has some interesting observations about the relationship between formal/informal opportunities, collaborative endeavour, and a sense of ownership. Their conclusions suggest that:

approaches which are based on collaborative enquiry and that support teachers in reconstructing their own knowledge are most likely to lead to transformative…

Which brings us to…


Learning Communities.


The work of Dylan Wiliam (@DylanWiliam), a leading authority on both formative assessment and the model of staff working collaboratively in enquiry groups that he calls ‘Teacher Learning Communities’, has provided much of the stimulus for the actual nuts and bolts of our programme for next year. This white paper on Sustaining Formative Assessment with Teacher Learning Communities is a must-read, while this webinar on Five Components of an Effective Teacher Learning Community provides similar ideas in a different format.

Another of the more practical reads comes from the work done in developing the NCSL’s Research and Development Kitbag work. The secondary phase case studies are well worth a read… Likewise, reading the NCSL’s Leading a Research Engaged School has proved helpful, particularly in relation to thinking about where we might look outside of our own school for research expertise (I’ve not read this lot yet, but may do…)


Although the actual model that we are pursuing leans heavily on Wiliam’s work, the intellectual exercise of looking at the background research is, in my opinion, a worthwhile pursuit in itself. A couple of meaty examples come from work presented by Ray Bolam and colleagues:

…a group of people sharing and critically interrogating their practice in an ongoing, reflective, collaborative, inclusive, learning-oriented, growth-promoting way (Toole and Lewis, 2002); operating as a collective enterprise (King and Newmann, 2001). Summarising the literature, Hord (1997, p1) blended process and anticipated outcomes in defining a ‘professional community of learners’ (Astuto et al, 1993) as one “…in which the teachers in a school and its administrators continuously seek and share learning, and act on their learning. The goal of their actions is to enhance their effectiveness as professionals for the students’ benefit; thus, this arrangement…”

The key characteristics of such a community seem to boil down to:

  • shared values and vision
  • collective responsibility
  • reflective professional enquiry
  • collaboration
  • the promotion of group, as well as individual, learning


More on collaborative professional learning.

Read an introduction to the idea of moving from CPD to JPD (Joint Practice Development) in this National College resource on Powerful Professional Learning: a school leader’s guide to joint practice development. This paper, from Aileen Kennedy at the University of Strathclyde, also explores perceptions of the idea of collaborative CPD and potential barriers, including a review of pertinent literature.

Jigsawing your seating…

#15MinForum – 24/5/16

This week’s 15 Minute Forum was led by Richard Stansbridge, one of our Year Leaders and a Geography teacher… and DT teacher… and BTEC teacher… something of an all-rounder, you might say! Richard presented a great idea related to jigsawing group work…


The power of peers

At its heart, this is a strategy that relies on students working cooperatively in a first group task to develop/extend their own understanding of a given topic, before then peer-teaching in a second group task with a new team.

In Dylan Wiliam’s (@dylanwiliam) Embedded Formative Assessment (2011), he identifies four main factors that emerge from the research that lead to the profound effects that cooperative learning can have:

  • Motivation. Students help their peers learn because, in well-structured cooperative learning settings, it is in their own interests to do so, and so effort is increased.
  • Social Cohesion. Students help their peers because they care about the group, again leading to increased effort.
  • Personalisation. Students learn more because their more able peers can engage with the particular difficulties a student is having.
  • Cognitive elaboration. Those who provide help in group settings are forced to think through the idea more clearly.

Elsewhere, in Visible Learning for Teachers (2011), John Hattie (@john_hattie) says of peer tutoring:

the effects are as great on the tutor as on the person being tutored… students learn much more when they become their own teachers (and teachers of others)… when students become teachers of others, they learn as much as those they are teaching.

So, some solid reasons to explore the strategy, play with it, refine it and embed it… (no wonder Phil Beadle has been quoted, in this post at least, as calling it “the ultimate of all teaching techniques”!)


Running the activity…

The first part of the activity involves students working in groups on a given ‘topic’ to become an expert. Richard explained that for him, this starts first with thinking, recall and sharing of initial ideas (I’m a big fan of the mantra that all group activity starts with individual thought!). Working as a group, students then draw on a range of resources (either using their own research skills, work from previous lessons, or carefully chosen resources shared by the teacher) to develop their own understanding and to prepare new resources with which they will later teach others (continually reminding the students of this responsibility to ensure they are striving for high quality!). There is a great opportunity at this point to engage students in agreeing success criteria for the teaching resources they are developing for the next phase of the activity…

Step 1 – collaborative learning

Students then move to their new groups with the responsibility of teaching the people in their new group… with an emphasis on ‘teaching’! Richard explained that he will typically have encouraged them to have developed some sort of resource to teach from, and will provide them with additional resources with which to do the teaching (mini white boards etc). Nonetheless, there will still be a few individuals who need reminding (and possibly supporting) to do something other than simply say “here are my notes – copy them!” Again, there is scope here for working with students to reflect on what good teaching resources look like, forcing them to think about their own learning in the process. Likewise, there is an opportunity to support students with considering the language they use to challenge and support each other in this group context.

Step 2 – peer teaching




What is the key to making it work?

Planning, planning, planning. Although Richard felt that all of the potential challenges of working in this way can be mitigated with careful planning and assertive behaviour and classroom management strategies, he did highlight a few key areas which should be considered, including:

  • The resources being used in both stages. What do you want the students to use? What do you want students to develop as a teaching resource? Have they got everything they need and do they know how to use them effectively?
  • Quality control. How do you know the peer-teachers are teaching the right thing? This brings to the fore the potential tension between the process of student learning and the coverage of curriculum content. Richard’s response was that having students work in this way frees the teacher up entirely to circulate, listening and observing, offering insightful, timely feedback and prompting as appropriate, which should take care of most of the concern. That said, it will always need following up at some point!
  • The level of challenge. Are they learning a new topic or developing something about which they already have some ideas? Pitch it too high or too low and the group dynamics could be affected… Or is it a revision task with the emphasis on resource construction? Hattie identifies that cooperative learning tends to be “most powerful after the students have acquired sufficient surface knowledge to then be involved in class discussion and learning with their peers”.
  • The groupings. Perhaps the biggest factor – are the groups based on separating out certain characters? How are you going to ensure individual learning needs are effectively supported? Are the groups mixed ability or similar ability? Richard expressed his personal preference for mixed, and the research backs him up: Marzano, Pickering and Pollock, in their review of the existing research on cooperative learning (Classroom Instruction That Works (2001), p87), identify a strong effect size for heterogeneous (mixed) grouping compared to homogeneous (similar) grouping…

…students of low ability actually perform worse when they are placed in homogeneous groups with students of low ability—as opposed to students of low ability placed in heterogeneous groups. This is evidenced by the negative effect size of –.60. In addition, the effect of homogeneous grouping on high-ability students is positive but small (.09). It is the medium-ability students who benefit the most from homogeneous grouping (ES = .51).

Richard closed the session with a few suggestions for developing the idea further, including the use of roving reporters and envoys, or expectations around final presentations back to the class…

This post from Alex Quigley (@HuntingEnglish) offers some sound advice around setting up groupwork generally.

This post from David Didau (@LearningSpy) has a few of his own reflections on jigsawing that are worth a read.


Are they REALLY learning?…

Amongst our Challoner 10 you will find ‘high expectations’, ‘total engagement’, ‘differentiation to challenge and support’ and ‘effective questioning’. Although, as a starting point for our journey which sets out a shared vision for learning & teaching, it has been a very useful document, I do sometimes think that by drawing out the distinct themes in the way we have, we run the risk of implying to staff that they should be viewed as discrete areas of classroom practice. The reality is far from this – the intricate web that these related areas form is almost too complex to disentangle.

The relationship between these four areas in particular has been at the front of my mind in the last few weeks as we’ve continued our process of developmental learning walks and seen a fantastic range of learning strategies and teaching approaches in a range of subject areas, across a range of teaching groups. With public exams at KS4 and post-16, and internal exams for years 7 and 9, a lot of students have their heads in the books revising, both in lessons and in study areas around the school. It has been interesting to see how some students, when given autonomy to choose how best to revise, end up just sitting and re-reading sections of notes or of a textbook, moving me to ask the question: are they really learning?

This observation was connected, at least in part, to the reflections from our recent 15 Minute Forum on using summative assessments for formative learning activities, where we explored the idea that reviewing a test or assessment should rarely (if ever) involve simply giving the students a correct answer – far better to engage them in thinking about how their existing model of understanding needs to be adjusted or developed in order to reach the correct answer themselves. Equally, the review of evidence on marking from the EEF, which I discussed here, raises an important question about whether our marking (and perhaps our feedback more generally) requires pupils to work to remember or to reach a correct answer.

In a lecture given by Professor Rob Coe (@ProfCoe) a couple of years ago entitled ‘Improving Education: a triumph of hope over experience’ (available as a written report here), he explores the idea, amongst many others, that ‘learning’ is not easily observed…


David Didau (@LearningSpy) has written a nice blog post looking at the idea of proxies here, which is worth a few minutes of your time.

So what are we to do?

Well, amongst Professor Coe’s suggestions are striving for clarity around what learning actually ‘is’ and how it happens, and then investing heavily in sustained professional development to share this understanding and strive to embed learning and teaching strategies that are truly focussed on an informed understanding of what learning ‘is’. This will be the backbone of our professional development plans for 2016-17.

In the meantime, he also offers a simple suggestion:

think hard

By his own admission, “obviously, this is over-simplistic, vague and not original”. However, if it forces us – and our students – to ask the question…

‘Where in this lesson will students have to think hard?’

…it may be a very useful rule of thumb.



‘mark less, but mark better’

The EEF’s review into marking, published last month, has come at an opportune time as we continue to embed and refine our approach to written formative feedback.

Is it groundbreaking? No. Is it worth a read anyway? Yes.


We all know the score with regards to teacher workload – this report from the TUC in February of this year provides some interesting contextualising figures:

The most unpaid overtime is done by teachers and education professionals (with more than half of them working an average of 11.9 hours unpaid every week)

… and I’d wager that there are more than a few teachers who occasionally – or even routinely – do more than this average! While such figures will inevitably continue to colour the perception of many outside the profession, making a challenging climate for recruitment even more so, our focus at the moment is on doing what we can to support those staff that are already in our school to help them find a manageable balance.

According to the Government Response to the Workload Challenge, published in February last year, 53% of those who participated in the survey identified marking as one of the areas that represents opportunity to reduce workload (only ‘recording, inputting, monitoring and analysing data’ featured more often in responses, at 56%). At around the same time as this report was published, we started the process of rethinking our assessment policy…


Borrowing from a phrase that I’d heard Christine Harrison use at a conference where she spoke about her work on AfL, one of the guiding principles for a central policy that we knew needed to work across the whole school in a range of contexts was the idea that we wanted consistency of principle rather than needing uniformity of practice. To this, we added the mantra (in relation to written assessment) that it should be done at the right time, for the right reasons (that is ‘to support the progress of students‘ rather than ‘to prove to an observer/ inspector/ line manager that I do it’!), and off we set…

Much of the final document focussed on the written feedback (i.e. ‘marking’) side of things; the classroom-based side of assessment and feedback (i.e. the ‘short cycle’ formative assessment I referred to in this post) is picked up elsewhere through our focus on classroom practice in the learning and teaching programme. It sets out minimum expectations and core principles (whilst avoiding being unnecessarily directive or prescriptive) in terms of the frequency of formative feedback and the importance of students being given time to reflect and respond to feedback (DIRT) etc. It also prompted us to make a few potentially risky decisions (for good reasons!), for example removing half-termly data drops, opting instead for a ‘live’ system. This allows subject areas and class teachers to add interim assessment data as and when summative assessments are completed, so that schemes of learning can be planned and scheduled in a way that makes sense for the learning and development of ideas, rather than scheduling them just so that the assessment data from that unit can be included in an arbitrary data drop each half term.


Is it all working perfectly? Not yet. However, we are convinced that the principles are the right ones, and we are taking every opportunity to remind staff that we want them marking at the right times and for the right reasons: we care about staff wellbeing, and we care about the learning experience of our students. If we want our staff to work sensible hours, then we recognise that there may be occasions where compromises have to be made: we want staff to make decisions about judicious, high-quality use of the red pen in a way that maximises impact at key times, and to ensure that time is prioritised for the planning of great learning experiences across lessons and units.

In this post on sharing effective practice in relation to marking and using this to refine practice across the school (emphasising the long-term developmental focus rather than short-term monitoring of compliance), Stephen Tierney (@leadinglearner) shares some interesting ideas on moving a policy from words on a page to tangible changes in practice. There are certainly some things for us to attend to in this respect over the remainder of the academic year – creating opportunities for staff to see the detail of what is happening around the school and what seems to be working, not only from the point of view of workload, but also in terms of what effective written feedback actually looks like: if we’re looking at written feedback with an eye on economy and efficiency, we have to look carefully at what has the most impact. We have work to do around ensuring it has impact for the students: impact that they are able to reflect on in a meaningful way, and that they can articulate – for their own benefit, but also to others – in terms of the specific links between the feedback they are being provided with and the progress they are making.

Clearly, this has to be done alongside ongoing reflection on what the research is telling us, both that which is being gleaned as part of the work of one of our Professional Learning Project groups and also the larger-scale and more robust findings of the EEF’s long-awaited review into marking, published last month. Although one of the key messages from the document is that the evidence is actually fairly scant in relation to the impact of marking, don’t let that put you off reading it – there are still suggestions that emerge from the research that does exist, and many of these are to do with the fine details of how we mark, rather than recommendations that would impact on the broad strokes with which we have set out the principles of our assessment policy.


“Does our marking approach require our pupils to work to remember or reach the correct answer?” p12.

This is the question that I think struck me most in the whole report. It isn’t necessarily the most significant point, but given our recent reflection on the way we are using summative assessments formatively and the conclusions we’ve reached about the need for students to think hard for themselves, this seems like a potentially useful rule of thumb in terms of considering whether our marking is focussed on surface-level corrections or development of deeper understanding.

The contention presented by the EEF review seems to be that mistakes (something a student can do but has not on this occasion) should be marked as incorrect but left for students to correct themselves, while errors (resulting from misunderstanding or having not yet mastered something) should perhaps be dealt with by providing hints or questions to lead the students to developing a more complete and accurate understanding of the topic at hand.

Although the distinction between errors and mistakes isn’t made in the same way in this transcript from a talk given by Dylan Wiliam, he does place a similar emphasis on the idea that it should be a standard expectation that students are expected to do the thinking:

We suggested that instead of telling students that they got 15 out of 20, the [maths] teacher could, instead, tell them that five of their answers were wrong, and that they should find them and fix them. The important feature of this feedback, like comment-only marking, is that it engages students, leaving them with something to do. This technique was subsequently adopted by English teachers when they provided feedback on students’ final drafts of writing assignments. Rather than correcting spelling, punctuation and grammar, the teachers put a letter in the margin for each error in that line, using a G for an error in grammar, an S for a spelling mistake, a P for punctuation, and so on. For the stronger students, the teacher would simply put a dot rather than S/P/G in the margin for each error, and for the weaker students, the teacher might indicate where in the line the error was. The idea is that the feedback gives something to the learner to do so that the immediate reaction of the learner is that they have to think.

Elsewhere, in this fairly weighty review of research on formative feedback from Professor Valerie Shute, though not specifically about written formative feedback, there are some interesting comments about the idea of ‘directive feedback’ (providing corrective information) as opposed to ‘facilitative feedback’ (providing guidance and cues):

“Conventional wisdom suggests that facilitative feedback…would enhance learning more than directive feedback…yet this is not necessarily the case. In fact, some research has shown that directive feedback may actually be more helpful than facilitative—particularly for learners who are just learning a topic or content area (e.g., Knoblauch & Brannon, 1981; Moreno, 2004). Because scaffolding relates to the explicit support of learners during the learning process, scaffolded feedback in an educational setting may include models, cues, prompts, hints, partial solutions, as well as direct instruction (Hartman, 2002). Scaffolding is gradually removed as students gain their cognitive footing, thus directive feedback may be most helpful during the early stages of learning. Facilitative feedback may be more helpful later on, and the question is when. According to Vygotsky (1987), external scaffolds can be removed when the learner develops more sophisticated cognitive systems, where the system of knowledge itself becomes part of the scaffold for new learning.”

So, like everything else to do with learning and teaching, it isn’t straightforward (and this is before we’ve even got into John Hattie and Helen Timperley’s work on the power of feedback and the importance of considering the ‘level’ at which feedback is directed: ‘task’, ‘process’, ‘self-regulation’ or just ‘self’…)

It is unlikely to be easy to prescribe for teachers in simple black and white exactly what they should do and when… and nor should we need to, if we have faith in their professional judgement and intuition – based on a detailed understanding of each individual – about what sort of feedback is most appropriate at any point in time for any single student. Perhaps the emphasis should be on making sure that our staff are proficient in working with a range of strategies and that they have an appreciation for the rationale behind which strategies can work and when, then trust them to make the right decisions.



The EEF review makes various other suggestions based on the evidence that is already available, some of it reassuring in terms of the direction we are heading, some of it giving pause for thought and highlighting areas that we would do well to give more thought to… but then, if anyone in education reads a report like this and concludes that they’ve got it all nailed already, I suspect we could say that they either haven’t read the report properly, haven’t really understood it, or don’t really have a true appreciation of what is going on in their own setting…

Is anything in the report groundbreaking? No, but it offers some tangible suggestions around which we could do some developmental work with staff to ensure we get the most impact for the most reasonable amount of input. I’m looking forward to reading whatever comes next…



As we continue to look outside our own school to learn from others, I’m also rather intrigued by the idea of ‘marking the Michaela way’, which seems like a minimal (to say the least) whole-school approach to marking that could have a lot going for it… A thought-provoking read!

A few more interesting reads…

Making summative assessments formative

#15MinForum – 10/5/16

This week’s 15 Minute Forum was led by Nikki Cloudsdale (@NikkiCloudsdale), our Maths Subject Leader, with some suggestions for taking summative assessment tasks (tests, exam papers etc) and turning them into formative learning strategies (perfect timing, having just finished internal exams for years 7 and 9, and with exam groups in years 10-13 ploughing through past papers!)…


In a webinar that I attended a couple of months ago (if one ‘attends’ a webinar?!), I listened to Terry Morgan and Beth Carr from the Learning Sciences Dylan Wiliam Center talk about ‘The Case for Short Cycle Formative Assessment‘. Much of the webinar, like the others in the series, revolved around unpacking the key dimensions of formative assessment (and the compelling evidence base that underpins it):

Formative assessment overview (credit: ‘The Case for Short Cycle Formative Assessment’)

In this particular webinar, this was specifically set in the context of short-cycle formative assessment (i.e. the minute-by-minute processes) as opposed to medium or long cycles, which look at assessment between teaching units or across units and terms (figure credit: ‘The Case for Short Cycle Formative Assessment’).

Listening to Nikki talking through some of the strategies that the maths department are playing with at the moment, I was struck by the potential for some of the strategies to help tie the different ends of the cycles together, taking summative assessments completed to assess progress across entire units or even entire key stages, and then using them as the basis for learning activities rooted back in the short-cycle formative assessment processes.

Nikki started by outlining the journey that the Maths department has been on in the last 18 months with regards to assessment within the department, highlighting some of the features of their current approach, including tracking sheets that students keep in their exercise books on which they record progress in various tests alongside the key feedback from their teacher (in the form of WWW/EBI) for each key objective; formative assessment tasks and feedback as they move through a unit of work; then a final summative assessment which is graded. But, as Nikki herself put it, the grade is just one part of their feedback – the most important part comes next…

At the heart of the strategies listed below, is the idea that reviewing a summative assessment should only rarely (if ever?!) be about simply showing the students a mark scheme and talking through it. Rather, it should be about engaging the students in meaningful reflection – with them in the driver’s seat – on the hows, whys and wheres of their mistakes and misunderstandings in order to help them move their learning forwards.

Here are a few of the ideas:

  • Peer assessment of a selection of responses to a question: project photos of selected responses or share them on nearpod, and charge students with marking it (without looking at the mark scheme!) and then perfecting it. Round it off by asking students to suggest a suitable improvement target for this individual, forcing students to think about the skill, exam technique, or approach (metacognition!)
  • Champions of a question: identify areas of strength within the test and make every pupil a champion of some aspect of the assessment. Provide opportunity for students to move around to find the solution from the champion, or use a jigsaw seating approach to help structure the sharing and collaborating (activate students as resources for each other!)
  • Develop a model solution/answer: working in small groups or pairs, task students with developing model solutions/answers for themselves (taking ownership of their own learning!) before then rearranging groups for peer-teaching (activating them as resources!)
  • And, of course, at the end, having used the assessment to identify priority areas (some sort of gap analysis), point students in the direction of specific resources that they can use to support them with these specific priority areas… and provide them with time to actually do it!

I wonder if we might really get them thinking carefully if, when marking the work, we were to give the question a score, but not use any ticks to show which aspects of the answer gained the credit. The students are then tasked with devising their own mark scheme to work out for themselves specifically where they did and didn’t gain the marks…

In all of these strategies, we could argue that the teacher should be acting as a conductor rather than a controller, freeing them up to coach and probe students in the construction of their own understanding, rather than simply telling them what the mark scheme wanted them to write…

Doing so will surely encourage the students to be fully involved in reflecting on the important part of any test: what it is that they need to attend to in order to move forward… (but then we all know that, according to John Hattie at least, the point of testing is not really about telling the students anything at all, but rather it is to find out what you as the teacher did well – who did you teach well and who not so well, what did you teach well and what not so well, and so on…)