‘The current measures have some weaknesses,’ said the National Audit Office in its recent report*. But what were these ‘weaknesses’ in measuring school attainment?
The Department for Education and academics told the NAO they were concerned these measures were unfair to schools in challenging circumstances. This seems to clash with propaganda pushed by the Government that there are ‘no excuses’ for failing to reach mandatory and ever-rising benchmarks. It’s encouraging that someone at the DfE recognises that judging schools merely on results can be misleading, and that there are circumstances, especially in combination, which handicap schools. It’s a pity this insight doesn’t reach ministers.
The NAO noted the DfE was introducing a new progress-based measure from 2016. This was designed to be fairer than judging schools on test results alone. But the NAO warned these new measures would, in the short term, make it difficult to measure performance year-by-year. At the same time, Ofqual warned that changes in exams may ‘affect national and school-level trends’.
That’s unlikely to stop politicians making such comparisons and trying to identify trends. For example, DfE number crunchers said it wasn’t possible to compare Key Stage 2 writing tests with earlier years because the test fundamentally changed in 2012. But that didn’t stop the DfE press office from comparing the 2014 SAT results with 2009 in one of its press releases. Neither did it stop the Tories from issuing party literature making the same erroneous comparison.
In other words, the many changes, all running concurrently, will make it impossible to measure whether performance has risen or not. And none of this addresses the issue of whether a rise in ‘performance’ is actually a rise in the quality of education offered in schools.
As we’ve seen in the past, schools can adopt strategies which raise headline results but don’t have a positive effect on the education pupils receive. These include teaching to the test; overuse of equivalent exams (this scam’s been reduced, fortunately); neglecting certain subjects or other important skills; and tweaking oversubscription criteria to deter applications from pupils likely to lower a school’s league table position.
Neither does it address an issue which will no doubt become prominent when the new performance measures kick in: whether it’s possible to measure ‘progress’ fairly. Children aren’t identical – they don’t develop or ‘progress’ uniformly. They have growth spurts; they enter puberty at different ages; their learning can shoot forwards, stand still and even drop back – all at different times. The way children develop is affected by many circumstances combining nature and nurture. Expecting all children to progress at the same rate could be as unfair on schools as the present flawed measure focusing merely on exam results.
*National Audit Office: Academies and Maintained Schools: Oversight and Intervention, October 2014 (page 21)