Our data hasn’t had a haircut in a while. To be honest, we’ve allowed it to grow and we looked in the mirror and realised that it looks a bit… well, unkempt. It is time to cut it all back, but the question is how short can we get away with?
So, what is the minimum amount of data a school can collect about students centrally whilst being an effective school?
Sitting behind this question is the desire to make our data systems simpler, easier and all about action. Partly this is about workload, but it is also driven by a creeping doubt about the value of some of the data we collect.
But before we head off to the barbers, let's take a look in the mirror. Just how much data is there, and which bits need trimming?
The data we collect falls into two camps which we’ve called live data and periodic data. Live data includes data collected every day by teachers: taking a register, recording minutes late, issuing merits, flagging missed homework and missing equipment. Before the advent of SIMS, teachers captured this data in their markbooks using their own codes and systems. What has changed is the requirement to do this on a central database. The ability to capture and process this data has been beneficial. For example, we can monitor patterns of homework completion and act quickly when a student starts to miss deadlines. This does not necessarily mean a punitive response. For a normally reliable student, a sequence of missed homeworks may be a sign of problems at home or that they are struggling with workload. An effective pastoral system looks for signs of problems and patterns of behaviour. However, this intelligence comes at a cost which can be measured in terms of teachers’ time and lost learning time. We must ensure that, for the system as a whole, the potential benefits outweigh the collection costs. We must also make sure that the data collected results in action, which in turn changes behaviour; in other words, that the benefits are realised. We can draw some simple data collection principles from this example, as follows:
- Have a clear rationale for collecting the data
- Ensure the data results in action
- Ensure the action has beneficial impact on students
- Ensure the benefits exceed the cost of collection
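The homework example above can be sketched in code. This is an illustrative sketch only, not our actual system: the function name, thresholds and reliability rate are invented for the purpose of showing how live data can surface a change in pattern for a normally reliable student.

```python
# Hypothetical sketch: flag a normally reliable student whose recent
# homework record shows a run of misses, so pastoral staff can check in
# rather than assume defiance. Thresholds are invented for illustration.

def needs_check_in(homework_record, recent=4, reliable_rate=0.9):
    """homework_record: list of booleans, True = handed in, oldest first."""
    if len(homework_record) <= recent:
        return False  # not enough history to establish a pattern
    earlier, latest = homework_record[:-recent], homework_record[-recent:]
    was_reliable = sum(earlier) / len(earlier) >= reliable_rate
    now_missing = latest.count(False) >= 3  # three misses in the last four
    return was_reliable and now_missing
```

A student who handed in ten of ten homeworks and then missed three of the last four would be flagged; a student who has always struggled would not, because the change in pattern, not the absolute count, is the signal.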
Periodic data is the information we ask teachers to provide at the end of a period of time. In our case, this is typically three times a year (already less frequent than many schools) and most of this data is also reported to parents. Our periodic data collection includes ‘metrics’ on behaviours (e.g. attitude, effort, behaviour), attainment data (e.g. actual grades and grade predictions) and qualitative data in the form of written reports. This data collection differs slightly for each key stage, but happens for each year group (7 to 13) and results in a total of 19 data collection points over the academic year, about one every two school weeks. It is unlikely that a teacher will teach every year group, but it is not unusual for them to teach four or five year groups, meaning they are required to enter this type of data roughly every three weeks.
Arguably, all of the above data is worth collecting. It allows us to send parents regular updates and commentary on their child’s education, which they value. It allows us to work out which students most need our attention. We do stuff with this data, I am confident, which results in benefits to the students. But to come back to our key question, what is the minimum we could collect whilst achieving at least the same impact? Can we really claim that our system is simple, easy and all about action?
Whilst some of the trimming will be around the live data, we believe it is the periodic data where the scissors are needed most. You can decide if the haircut is too severe.
Underpinning our proposed system is the belief that progress can be defined as ‘the acquisition of the required knowledge’. I appreciate that definition will be controversial to some, but here is not the place to debate it. Right or wrong, it sure as hell helps simplify data collection if you run with it. It also makes life easier if you accept the definition of learning as a change in long-term memory (Kirschner, Sweller and Clark). Assessing students soon after teaching will not indicate learning. Looking for what students can do in the moment, rather than what they know and how they can apply this knowledge at a later date, will provide a false view of progress. This view of progress takes us down a path away from levels and skills ladders, away from flight paths, away from collecting grades from frequent tests and assessed work, and (most importantly) away from a belief that there is any point in frequently collecting ‘performance’ data.
What we are left with is the need to carry out a robust assessment of what students know after an extended period of time; tests. We have chosen to carry out these tests twice a year for core subjects and once a year for all others (for years 7 to 10 – we have retained the traditional mock exams for Year 11). We believe that to do these more often would not allow us to sample a sufficiently large domain of knowledge to achieve a reliable inference about what has been learnt. There are controls around how these tests are set. These include moderation of the papers to ensure quality, limiting teachers’ access to the papers before the tests to prevent teaching to the test, and students sitting the papers in controlled conditions (all at the same time in the exam hall) so that we eliminate extraneous factors influencing results as far as possible. It is important to us that students do not over-prepare for these tests; we are trying to assess what they have learnt and not how hard they can cram. Therefore, we have issued limits to teachers on what preparation work they should expect from students and guidance to parents about the importance of not over-preparing. We do not want these tests to be high-stakes for students, so there are no consequences resulting from a student’s performance, e.g. setting, retakes or corrections. These measures, in combination, should create conditions in which the outcomes of the test tell us a lot about what students have learnt. No doubt we will have improvements to make after our first attempt (with Year 7 and 8 this term), but we are confident that we’ve made these tests as robust as possible.
Given one of our main motivations is to reduce workload, it would be reasonable to ask at this point how introducing these tests will help. Firstly, the tests will reduce the need to carry out as many formal assessments throughout the year (with all the marking and moderation that this involves). We have dismantled our post-levels assessment system which required frequent allocation of grades to pieces of work and for these grades to be reported. In doing so, we have increased a department’s autonomy to make decisions about how best to assess students over the year so that they can create a system which meets their needs. Secondly, the tests have been designed to minimise marking. Many subjects have made use of multiple choice questions to test aspects of the syllabus and we have invested in optical mark reading technology to mark the papers and produce question-level analysis. We have also employed invigilators, which means teachers can use the time freed from lessons to get ahead with marking any written answers. Perhaps most significantly, the use of test data as our key measure of attainment means the amount of data entry for teachers over the course of the year is reduced. The test marks will be standardised as a scaled score for teachers and no data entry will be required.
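The standardisation method isn’t specified above, so as an illustration only: one common approach is to standardise raw marks against the cohort mean and spread, then map them onto a fixed scale (here, mean 100 and standard deviation 15, both invented parameters; the school’s actual scaling may differ).

```python
# Illustrative sketch of scaled-score standardisation, not the school's
# actual method. Raw marks are converted to z-scores against the cohort,
# then rescaled to a target mean and spread.

from statistics import mean, pstdev

def scaled_scores(raw_marks, target_mean=100, target_sd=15):
    """Convert a cohort's raw test marks to scaled scores."""
    m = mean(raw_marks)
    sd = pstdev(raw_marks)
    if sd == 0:  # every student scored the same mark
        return [target_mean for _ in raw_marks]
    return [round(target_mean + target_sd * (x - m) / sd) for x in raw_marks]
```

One advantage of reporting a scaled score rather than a raw mark is that papers of different difficulty in different years remain comparable: the score describes a student’s position relative to the cohort, not the paper.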
Capturing a measure of progress only once or twice a year is, we believe, educationally the right thing to do, but it leaves us feeling quite exposed (not least to Ofsted, who make the right noises about not pre-judging how schools monitor progress, but our recent experience makes us cynical that inspectors can break away from frequent data drops and flight paths). We are also conscious of not undervaluing the professional judgement of teachers, who make lesson-by-lesson assessments of students’ understanding and of their learning over time. The teacher’s cumulative assessments (informal and formal) should inform our actions in some way. We questioned, however, whether we needed to capture this judgement centrally. After all, if a teacher assesses that a student does not grasp what they have been taught, it is better that they address this there and then. Does passing on information about performance create the expectation that reporting under-performance is sufficient, and that someone else will deal with the problem, rather than getting on with addressing knowledge gaps? Following much debate, the decision was made that a teacher judgement should continue to be captured centrally so that patterns across subjects could be established. However, following our principles of simpler, easier and more action, we decided to change the way this judgement was to be collected and what would happen as a consequence (more later).
In summary, our data collection haircut in relation to student progress has left two simple metrics:
- Scaled scores for tests taken once or twice a year
- A periodic teacher judgement about progress based on diagnostic and department-standardised assessed work.
This data collection is significantly lighter than in our old system and we believe it will provide a much more reliable indicator of progress.
We all know that students will make better progress if they adopt certain behaviours: listening, staying on task and completing homework, to name a few. If these learning behaviours are in place, barring any unaddressed learning difficulty or emotional turmoil, students will learn what they are taught. Teachers will, of course, promote desirable behaviours, but schools also capture data on these for two reasons: firstly, so that pastoral teams can identify patterns of behaviour and act to address them; secondly, so that reports can be made to parents.
Waiting for reports to parents to pick up patterns of behaviour is leaving it too late. Much of our data entry around learning behaviours was therefore only serving the purpose of informing parents. To be able to address problems quickly, we found that live data capture was much more useful and more likely to result in action. If parents were taken out of the equation, the periodic collection of data about learning behaviours would add little value. If an overview were required, the aggregation of the live data would cover most aspects of a student’s behaviour, with the exception of a teacher’s judgement about a student’s effort, which was not being collected live.
Adding parents back into the equation, periodic reports covering learning behaviours have the following limitations:
- Parents should know before receiving the report if all is not well as personal contact should have been made by the teacher or tutor, so the report either acts as an unpleasant surprise or rubs salt in the wound.
- After receiving reports, parents often have no chance to work with the school to address any issues (we have one parent’s evening a year and three reporting points).
- Students can explain away any problems, and sadly some parents are inclined to believe that the teacher ‘just doesn’t like them’ or ‘can’t teach anyway’.
For the purpose of addressing deficits in a student’s attitude and behaviour, data and written reports fall short. However, for those students who were doing all the right things, we also recognised that reports can provide positive reinforcement and that nice ‘warm glow’ you get as a parent. If we were to reduce data input around reports we would also need to find alternative ways to recognise desirable behaviours.
We decided to separate the multiple purposes of the data collection which were:
- To inform parents about attitude and behaviour
- To inform our picture of students causing concern
- To praise students who met or exceeded our expectations
In short, we decided to scrap reports to parents which included metrics on behaviour and written reports.
Following our principles of simpler, easier and more action, we have fixed three points in the year (rather than the 19 data-points previously) when we will ask teachers to input their judgement on student progress and flag any concerns about a student’s effort – a yellow flag for some concerns and a red flag for serious concerns. Importantly, the system will default to ‘meeting expectations’, significantly reducing data entry time. The data entry for all classes will be carried out on the same day, scheduled as directed time and part of the 1265 allocation (previously all data entry and reporting was seen as additional to the 1265 directed time). This data will then be combined with the aggregated live data (attendance, punctuality, homework and behaviour warnings) to form a profile of student concerns. An academic and pastoral team will use this data to allocate students to a staged concern system. Each student in this system will be allocated a key worker, be it tutor, Head of Year or senior member of staff. Parents will then be invited in to discuss the concerns and agree what is to be done to address these. These parent meetings also happen in directed time, three times per year. This approach replaces heavy data collection and summative reports to parents with action where it is most needed. For parents of students causing concern, we replace a negative snapshot of the child’s attitude and behaviour with regular opportunities to work with the school to improve the situation.
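The combination of effort flags and aggregated live data into a staged concern profile could be sketched as follows. Every threshold, weighting and name here is invented for illustration; the actual staging criteria rest with the academic and pastoral team, not a formula.

```python
# Hypothetical sketch of combining periodic effort flags with aggregated
# live data to produce a staged concern level. Weightings and banding
# are invented; they stand in for the pastoral team's actual criteria.

FLAG_POINTS = {"meeting": 0, "yellow": 1, "red": 2}

def concern_level(effort_flags, lates, missed_homeworks, behaviour_warnings):
    """Return a staged concern level from 0 (no concern) to 3 (serious)."""
    score = sum(FLAG_POINTS[f] for f in effort_flags)  # one flag per subject
    score += lates // 3 + missed_homeworks // 3 + behaviour_warnings
    if score == 0:
        return 0  # meeting expectations across the board
    return min(3, 1 + score // 5)  # rough banding into stages 1 to 3
```

The default-to-‘meeting expectations’ design shows up here as the zero score: a teacher who has no concerns enters nothing, and the student generates no profile at all.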
In summary, our data collection haircut in relation to learning behaviours has left two sources of data:
- Live data about attendance, punctuality, homework and behaviour which can be analysed and acted on quickly.
- Concern flags for effort which can be entered quickly, in directed time, three times a year.
This data collection is significantly lighter than in our old system and we believe it will result in more assertive action to address undesirable behaviours.
Simpler, easier and all about action
One of the most significant problems with our old data system was its complexity. Periodic reporting went on throughout the year in a never-ending cycle. Teachers would miss deadlines and be caught out by pressure points where reports were due at the same time as competing priorities such as exam marking. With only three data-entry points, life should be simpler.
It is difficult to predict how much time will be saved. There will be considerably less data entry, but a little more time spent with parents and in marking tests. This seems like a better use of time and if we can keep the work within a reasonable working day, rather than data entry late at night, that will be a success.
The approach we are taking makes better use of the data we will collect. We know that we will do something with what we collect. Whether this results in tangible benefits for students, time will tell.
There is one other principle which we haven’t mentioned, but is perhaps the most important; trusting teachers. In creating our towers of data, we have (unwittingly or not) created an impression that we don’t entirely trust teachers to do their job. We collect data to check students are making progress, we collect data to measure the teacher’s effectiveness, and we collect data to ‘intervene’ because the teacher has let students fall behind. That may not have been the intention, but many teachers will tell you that this is what they feel is implied. Teachers process and act on more data every minute in the classroom than we can collect in a spreadsheet over a whole year. Our systems will only ever add marginal gains to the expertise of the teacher and we should be cautious about distracting teachers from the task in hand.
We are emerging from the barbers with our new data-haircut. It is not quite a crew cut, but it is as short as we dare go, lest we end up in isolation. It’s smarter and won’t need much maintenance. Let’s hope it doesn’t grow back too quickly.