
Posted on 27/02/13


Ofsted Dashboard Uses the Wrong Data

When Sir Michael Wilshaw showed me round Mossbourne some years ago one of the most impressive features was their use of student data. The school leadership and each individual teacher received reports, every six weeks, on student progress against expectation. No student could quietly slip behind and intervention could be quickly targeted at those that needed it.

Given that data was at the core of his success at Mossbourne it is no surprise that Sir Michael has introduced a data dashboard, with the very sound aim of ensuring governors are well informed. However, in using the "expected progress" measure it is as likely to mislead governors as to inform them.

The Expected Progress Measure is Flawed

Ofsted is absolutely right to want to judge schools on the progress their students make and not on their absolute results. One secondary school with 65% achieving 5 A-Cs (including English and Maths) could be coasting on a good intake, while another only achieving 60% may be getting their students to make more progress, given where they started from at age 11. The use of the % achieving “expected progress” is intended to address that, and to stop schools being satisfied with getting students to a C grade.

The problem, as I outlined here, is that the "expected progress" measure is a very crude one. It is not based on the expected progress of each student but on a crude yardstick that all students, whatever level they start on, should make three levels of progress. In fact how many levels of progress a student makes is strongly influenced by their starting point. I reprint here the graph from the previous article. While for Maths over 80% of students with level 5 at age 11 make the expected three levels of progress, only 32% of those on a level 3 at age 11 make that progress.

A Tale of Two Schools

To see the problem, let’s take two extreme schools, High Start and Low Start:

At High Start, all the students arrived with a level 5 and they get 85% of these to a GCSE B grade and 25% to an A in Maths. Any decent governor would be aware that this school is under-performing, but the dashboard would put it in the top quintile for expected progress, as only a B grade is expected on this measure for level 5 students.

At Low Start, all the students arrived with a level 3. 64% make 3 levels of progress, to a D grade, and many achieve a C. The school is getting twice the national average to three levels of progress, and many beyond that; it is probably in the top 10% in the country for value added. However this dashboard will show the school as under-performing.
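
To make the comparison concrete, here is a minimal sketch, in Python, of how the crude measure scores these two hypothetical schools. The cohorts and the grade-to-points mapping are invented purely for illustration; only the "three levels of progress" convention (level 3 to a D, level 4 to a C, level 5 to a B) comes from the measure itself.

```python
# Illustrative sketch only: two invented cohorts matching the High Start
# and Low Start descriptions above, scored against the crude rule that
# every pupil should make three levels of progress from their KS2 level.

EXPECTED_GRADE = {3: "D", 4: "C", 5: "B"}   # KS2 starting level -> grade three levels on
GRADE_POINTS = {"A*": 8, "A": 7, "B": 6, "C": 5, "D": 4, "E": 3}

def pct_expected_progress(pupils):
    """pupils: list of (ks2_level, gcse_grade); returns % meeting the crude measure."""
    met = sum(GRADE_POINTS[g] >= GRADE_POINTS[EXPECTED_GRADE[lvl]] for lvl, g in pupils)
    return 100 * met / len(pupils)

# High Start: 100 pupils, all arriving on level 5; 25 reach an A, 60 a B, 15 only a C.
high_start = [(5, "A")] * 25 + [(5, "B")] * 60 + [(5, "C")] * 15
# Low Start: 100 pupils, all arriving on level 3; 24 reach a C, 40 a D, 36 fall short.
low_start = [(3, "C")] * 24 + [(3, "D")] * 40 + [(3, "E")] * 36

print(pct_expected_progress(high_start))   # 85.0 - flattered by the crude rule
print(pct_expected_progress(low_start))    # 64.0 - penalised by the crude rule
```

Set against the national figures quoted above (over 80% of level 5 starters, but only 32% of level 3 starters, making three levels in Maths), the dashboard would rank these two schools in exactly the wrong order.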

The “expected progress” measure does not tell us how well a school is doing. Whether a school is above average is, statistically, much more likely to be down to the intake of the school than actual progress made.

What Ofsted’s Dashboard Should Include

The irony is that Ofsted collects data that accurately reflects student progress. Its annual Raise Online report, for each school, gives a measure of value added in each subject. These are not based on the crude 3 levels of progress for all, but on what each student should be expected to achieve (given where they started at age 11), and whether they are above or below that expectation. If Ofsted were to base their dashboard on these value added figures, it would be genuinely useful information for governors and parents.

This morning, on the Today programme, Emma Knight (Chief Executive of the National Governors Association) talked about the more sophisticated data that many governing bodies use. Learning from our visit to Mossbourne we built a similar data tracking system to theirs at the school I chair. Every teacher knows, for each subject, whether their students are red (below expectation), yellow (at expectation) or green (above expectation). The Governing Body receives each term a report that shows, for each year and broken down by target groups, what % of students are on target and above target. It ensures both the school and the Governing Body are aware of progress, and of any issues, long before the GCSE results – when it is too late to intervene for those students. This kind of system is probably now quite common and is, I think, the kind of data that Emma was referring to.

I am hugely grateful to Sir Michael for what we learned at Mossbourne all those years ago (even though it seems very obvious now). And I do realise that Ofsted does not have the data to produce that kind of detailed dashboard for progress in all years, as its data is principally based on GCSE results. But I ask him to review this dashboard, and to ask whether it informs or misleads governors and others. A switch from "expected progress" to "value added" would provide a genuinely useful resource for all.

 


Comments, replies and queries

  1. Hi Henry. I have no doubt that some governors such as yourself could make sensible interpretations of school data. But most won't have such skills. The whole education system – maybe the whole world – has gone data analysis mad. Yet the vast majority of people have no idea either how to properly analyse data, how to interpret it within some level of statistical validity, how to differentiate whether a (collection of) summary statistics is something to report on, and worse, how to report on any "findings" they think (usually incorrectly) they have discovered. It is a difficult skill, so why would they?
    My wife is now a governor and every month she gets pages and pages of RAISE online output. I scan over it for her – as a highly qualified and experienced mathematician / statistician – and, after careful thought, suggest she puts it straight in the fire.
    Most of the graphs are meaningless, many of the summary statistics are trivial, and there is very little in there that could not be gleaned from a conversation with various teachers in the school.
    I would be very keen to see data analysis used sensibly in the education system. But as I emphasise again, it is not an easy skill. Too many people think that producing a few charts and the odd calculation in Excel means they know what they are doing. But they don't. So unless the various governing bodies in the country have data analysis / interpretation skills far in excess of the rest of the population, then this whole idea is doomed to failure.
    Having said all that, since the current education secretary and his minion used to be directors of thinktanks and are prime examples of people who generally have no idea how to analyse data correctly, I fear that this will be foisted upon the school population and simply used as another "tool" to batter certain schools instead of correctly informing school decision making.

  2. Tubby Isaacs says:

    Henry,

    Did you see Wilshaw say this?

    "The best governing boards get the balance right between support and challenge. They ask the right questions, whatever school they're in – maintained schools, in individual academies, and especially in academy chains, where focused governance has brought about the greatest improvements."

    Is he aware of your analysis that improvement in LEA schools and academies is the same once you take into account the starting point of each school?

    Unless I’ve misunderstood him, this comment looks very dodgy. He’s promoting intelligent use of data?!

  3. Henry, what was that thing about lies, damned lies and so on? In between visiting the school I serve at as a governor today, I caught various bits and pieces of Mr Wilshaw’s actual words. Managing to contain my frustration when he likened some/a few/many (he failed to specify exactly) governors to the jury suspended in last week’s headline trial, I finally cracked when he talked about the lack of professionalism of governors.

    Is he unaware that that is precisely what they are not, nor were ever intended to be? Many governing bodies, especially those able to recruit a full complement, may well have professional people among their members with the time to devote to what has become a part-time 'job' in light of the way the role has been expanded, especially since the advent of Local Management of Schools almost three decades ago.

    MW clearly does not take the opportunity to research the facts behind his comments. Had he done so, he would have known that, far from the legal profession agreeing with the knee-jerk reaction of much of the media in commenting on the apparent limitations of the jurors in question, upon deeper reflection the wearers of funny wigs and flowing gowns acknowledged that there were some pretty profound observations put forward by the jury. Ethical/moral considerations may be beyond the man at the top.

    In the same casual manner, MW also misrepresented what might reasonably be expected of governors. In the event that collectively, they are likely to find the data difficult to interpret, as Paul Brown indicates, MW might be better employed in helping to clarify the confusions and inconsistencies you point to, Henry. Instead of announcing something remotely resembling a practically useful strategy, the current leader of Ofsted takes it upon himself to suggest another daft policy option for the education service to adopt unquestioningly.

    Exactly where does our illustrious Chief Inspector imagine that schools that are 'failing' pupils, in his words, will find the money to pay "super governors" when, sensibly, they need all the funds at their disposal to improve the outcomes for children? Moreover, how might they justify such payments by results? What about payment for the 'best' governors who do 'a good job' to keep 'good' schools on the straight and narrow?

    Must the 'education market' go the way of the financial markets, where only those CEOs and 'top' managers who are paid enough will commit their loyalty to the organisation? Wake up, Mr W! It's not your job to make the task of recruiting governors harder. Think before you pontificate!

    And, Henry, you are quite right to help the bulk of governors, of whom I am one, to appreciate that, as is always the case, we should beware the damned lies.

  4. The problem is that the role of governors has changed from being supportive to being challenging. This is made worse when schools become academies. The Academies Commission (2013) wrote that governors "might not be aware of or interested in assessing risks which are, in theory, attendant on their becoming directors of a charitable company."

    The Commission’s report cited Steve Barker, Chair of Governors at Collingwood College, Surrey:

    ‘Most converters don’t know what to look at and often get big surprises post-conversion at the extent of new responsibility and bureaucracy associated with company law etc.’

    So, the DfE overstated the "advantages" of academy status (which are in any case largely illusory, and most of which were already enjoyed by schools pre-conversion) but under-estimated the pitfalls. Added to that we have the problems with interpreting data as outlined in Henry's post and the comments above. These could lead to governors challenging schools on a false reading of misleading stats.

  5. The “expected progress” measure fails to recognise that children progress at different rates and these will vary during each child’s education. Learning isn’t an uninterrupted upward process – it comprises vertical slopes, peaks, plateaux and troughs.

    Foisting a uniform "expected progress" on every child as if they were all the same will result in yet more misleading and damaging data being used to judge how schools are doing. Moreover, it will not differentiate between the so-called under-performing schools (as measured by test results) which are doing a good job with their intake and the so-called high-performing schools which are getting good results because they've got a high proportion of previously high-attaining pupils.

  6. Being 'professional' is one of Ofsted's new wheezes concerning governors.

    As they are now deemed to be ‘professional’ they can be slagged off for not doing exactly as Wilshaw orders, no doubt especially in relation to signing up for academy chain status. They can also be paid as ‘compensation’ for their ‘professional’ work. They can also be subjected to ‘training’. As people who are ‘professional’, paid, and ‘trained’ they would, naturally, be subject to expectations as might be achieved by being employed by an academy chain. All quite obvious really.

    "In fact how many levels of progress a student makes is strongly influenced by their starting point."

    This should perhaps say ‘alleged starting point’.

    As usual with Ofsted it seems that junior schools are assessed against inflated KS1 results made up by infant schools. Or at least that’s the way I understand the ‘School Data Dashboard guidance’ document. Anyone know different?

  7. Roger Titcombe says:

    Is anyone taking account of these ever expanding marketisation costs? Now it is paid governors on top of executive directors who do no teaching, on huge salaries. Our local troubled academy has a 'Director of Transformation' (honest!). We have also forgotten the huge extra costs of Local Management of Schools, imposed back in 1990. This was only introduced to facilitate competition between schools and to force schools to adopt a business model. As well as duplication in every school of jobs done better by a small number of experienced and highly competent staff at 'county hall', there are ever expanding ranks of new, non-teaching jobs.

    The old system was not just better for innovation, teaching and learning; it was vastly cheaper. The only financial responsibility delegated to schools was departmental capitation. Headteachers were focused entirely on educational issues, with schools allowed to appoint teachers at various payscale levels on the basis of a formula driven by pupil numbers and other local factors under LEA control. The LEA dealt with the rest, including accountancy, buildings, cleaning, school meals etc. Before competitive tendering and outsourcing (employing poorer qualified 'operative' staff on low pay to finance megasalaries of bosses and shareholder dividends) these tasks were done extremely well by LEA officers well known to school leadership teams. Teaching and learning was supported by LEA teams at no cost to the school, and as Hackney shows, this can result in high standards and appropriate challenge to school leaders and individual teachers.

    The original five overlapping pay scales with annual increments to reflect experience were entirely appropriate and effective, as they ensured that middle and senior managers knew what they were talking about when it came to classroom practice. The complete absence of performance related pay and bonuses of any kind provided the cement that bound the staff together as a collegiate team, usually sustained through a high degree of collective decision making. Teachers were motivated by job satisfaction, fair promotion opportunities, decent, relatively modest pay and a secure pension that worked on the principle that current contributions (by employee and employer) pay for current retirement benefits. For much of my career this system was in surplus, effectively subsidising the government.

    The present cost of the English education service has exploded. The £8bn Academies programme (just the central management costs alone) is overspent to the tune of £1bn at the last count. On top of that there have been endless expensive and ineffective centrally dictated initiative 'roll outs', local reorganisations, and closures of hundreds of schools with good accommodation replaced by new academy buildings, many of which are of poor quality, with some funded by rip-off PFI schemes that will rob us taxpayers for generations to come.

    I could go on! The 1944 Education Act was so wise in so many respects, especially for keeping the government's hands off schools and what is taught in them. All that was needed was some remodelling to recognise the replacement of the initial tripartite system by comprehensive schools, which were becoming increasingly popular and effective throughout the 1980s.

    This is not just nostalgia from a boring old fart (OK the old fart bit may be right). Marketisation isn’t working, standards are falling, costs are spiralling and our international competitor education systems are leaving us behind.

    It is sheer madness, but the naked emperor is still strutting his stuff to the cheers of acolytes and the wilful blindness of the national media.

    I nearly forgot to mention OfSTED. What is the annual bill for this ever expanding sub-empire and how does it compare to the previous HMI service?

    I thought the recession required savings in public expenditure?

  8. Henry,

    This is extremely helpful – thank you. Do you have a chart showing the % of pupils making expected levels of progress at KS2 as well as KS4 for 2012, please? I assume there is a similar problem?

    • Thank you Henry for another valuable insight into the debate on school performance data. I can hopefully make a useful contribution in answer to Gaynor’s question above.

      In primary schools all pupils are expected to make two levels of progress from KS1 to KS2. Nationally in 2012 the proportions of all pupils making 2 levels progress were
      - 89% in English
      - 90% in Reading
      - 90% in Writing and
      - 87% in Maths.

      But, as at KS4, success differs hugely according to the starting point. It is much harder to get a pupil from a Level 2c at KS1 to a Level 4 at KS2 (2 levels progress) than it is to get a Level 2a pupil to a Level 4 (also deemed to be 2 levels progress). Pupils graded 2c, 2b and 2a are all expected to get a Level 4. However this represents a full 2 levels of progress for those on 2c but only 1.33 for those on 2a.
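
      As a rough check of that arithmetic, here is a sketch only, using the convention the figures above imply: each KS1 sub-level counts as a third of a level, with 2c at the very start of level 2.

      ```python
      # Fine-grained KS1 starting points implied by the figures above:
      # 2c = 2.0, 2b ~ 2.33, 2a ~ 2.67. All three groups are "expected"
      # to reach Level 4 at KS2.
      ks1_fine_level = {"2c": 2.0, "2b": 2 + 1/3, "2a": 2 + 2/3}
      ks2_expected = 4.0

      for sub_level, start in ks1_fine_level.items():
          print(f"{sub_level}: {ks2_expected - start:.2f} levels of actual progress required")
      # 2c: 2.00, 2b: 1.67, 2a: 1.33 - the same "two levels of expected progress"
      # demands half as much again from a 2c pupil as from a 2a pupil.
      ```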

      In 2012 in England 99% of pupils with higher prior attainment (those at Level 2a) made 2 levels of progress in Reading, but only 81% with lower prior attainment (those on level 2c). The difference is slightly greater in Writing (99% at 2a and 80% at 2c) and much greater in Maths (99% at 2a and 68% at 2c).

      Once again, as Henry has pointed out previously with regard to secondary schools, the effect of this is that whether a primary school is deemed to be making sufficient progress is much more likely to be defined by the intake than by the value it adds.

      As Henry explained in "Making Expected Progress: A Deeply Flawed Measure", this means there is a somewhat random element in the 'expected progress' measure.

      This random effect is exacerbated when the concept of 'more than expected' (i.e. 3 levels) progress is introduced. Ofsted Inspection Teams are currently very interested in this measure and evidence suggests they are using it as a 'deal breaker' in determining their judgement on achievement. For a school to be judged as having good or better achievement they are expected to have a greater than average proportion of pupils making 'more than expected progress'.

      The release of the validated 2012 KS2 data in RAISEonline at the end of February 2013 indicates that the proportions of pupils making ‘more than expected progress’ vary substantially depending upon starting points. In 2012 in England:
      - in English 47% of pupils with higher prior attainment (L2a) made greater than expected progress but only 9% with lower prior attainment (L2c);
      - in Reading 62% at 2a and 19% at 2c;
      - in Writing 55% at 2a and 6% at 2c;
      - in Maths 53% at 2a and 6% at 2c.

      The random effect is even further exacerbated for those schools with very high levels of prior attainment at the end of Key Stage 1, i.e. a high proportion of pupils at L3. In 2012 in England the proportions of L3 pupils making more than expected progress were:
      - 0% in English;
      - 1% in Reading;
      - 6% in Writing; and
      - 14% in Maths.

      In order to demonstrate ‘more than expected progress’ a L3 pupil at KS1 has to achieve a L6 at KS2. So far the introduction of optional L6 testing at the end of KS2 has not been widely embraced by primary schools. (That said there is an interesting variation between subjects as can be seen from the figures above.) However, as schools seek to maximise their chances of securing the increasingly important Ofsted good or better judgement it is likely that the proportions of pupils nationally being entered for and achieving L6 will increase. I am sure there will be huge implications for secondary schools in being able to demonstrate ‘expected’ and ‘more than expected’ progress when they have intakes with a significant proportion of pupils already having achieved Level 6.

      Finally, the new Ofsted School Data Dashboard which went live on 27th February 2013 states that Inspectors will now expect school governors to be familiar with the measures presented in the School Data Dashboard for their school. (Ofsted Data Dashboard FAQs). The following day, Ofsted published on their web-site the latest version of their subsidiary guidance for inspectors. This states that inspectors should consider whether governors … understand and take sufficient account of pupil data, or whether they are misled by ‘headlines’.

      Hmm. I wonder where Governors might be introduced to potentially misleading headlines?

      • Very informative response, Peter. Thanks for that.

        The introduction of Level 6 at KS2 introduces a new issue with the data. Oddly, while everybody else is expected to make 3 levels of progress, those who arrive on level 6 are only expected to make 2.

        So even though 67% of students who achieve 5a at age 11 in Maths go on to achieve an A* at GCSE, a level 6 student has made the expected progress if they achieve a B.

        Baffling.

  9. Tubby Isaacs says:

    It’s the Big Society in reverse, isn’t it? People free now, to get paid.

    Funny because we were led to think that governors didn't have any power at all, what with these LEA bureaucrats rampaging around their schools. Sounded to me like we might as well have installed a bunch of showroom dummies on governing bodies for all the power they had.

  10. On the dashboard, the element that is useful is the comparison to similar schools. This slightly alleviates the problem with using the flawed expected progress measure.

    So a grammar school taking 5a & 5b students would be under-performing for its intake if only 90% were making expected progress in Maths, even though that is well above the national figure. A school with a principally level 3 and 4 intake that achieved 60% would be over-performing for its intake, even though that is below the national figure.

    So the comparison of the school to national figures for progress is meaningless. But the comparison to similar schools is more useful – though it would be really informative if based on Best 8 value added and not on expected progress.

    • Thank you. Interesting and baffling indeed.

      I agree the ‘all schools’ comparison is largely meaningless. I am not sure about the validity of the ‘similar schools’ comparison yet as I would like to know what is being used as the determinant of a ‘similar average level of attainment’. I emailed Ofsted yesterday with a request for further information.

      The Data Dashboard Explanatory Notes define a similar level of attainment as the schools’ ‘average prior attainment score’. I have read the FAQs and the Guidance Document and none of these have any further detail.

      The definition ‘average prior attainment score’ suggests to me that for primary schools they are perhaps using overall Average Points Scores (APS).

      The reason I am sceptical about the use of overall APS is that it potentially introduces another ‘random effect’ which shows up in the Value Added (VA) measures. The effect comes from combining VA calculations for the 3 separate subject areas of Reading, Writing and Maths. This is illustrated in the example below from actual data in 2012 RAISEonline.

      Pupil A and Pupil B both had exactly the same overall APS prior attainment of 19.7 at the end of KS1 in 2008. At the end of KS2 in 2012 Pupil A had an overall APS of 32.5 and Pupil B an overall APS of 32.3. Pupil A made 12.8 points progress and Pupil B 12.6 points. Both pupils therefore made 'better than expected progress' in that 12 points is the equivalent of 2 levels. However, only one of these pupils contributed positive value added to the school's data and, counter-intuitively, it wasn't the one who made the greater progress. The overall VA for Pupil A was -0.7 and for Pupil B +0.4. The reason for this, without going into a very long explanation, stems from the fact that the two pupils had different subject score profiles at KS1 (as below).
      Pupil A – Reading 21 points, Writing 17 points, Maths 21 points. APS 19.7
      Pupil B – Reading 21 points, Writing 21 points, Maths 17 points. APS 19.7

      The effect above also comes into play at school level where individual pupil scores are averaged to produce school APS scores. Hence, there is a ‘random effect’ in the VA measure when it is aggregated across subjects. If Ofsted deem schools to be ‘similar’ based on Average Points Scores they are potentially far from similar if their individual subject points are not the same.
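
      For what it is worth, the APS figures in the example can be reproduced directly, assuming (as the numbers suggest) that the overall Average Points Score is simply the mean of the three subject point scores:

      ```python
      # Two different subject profiles collapsing to the same overall APS,
      # using the per-subject points quoted for Pupil A and Pupil B above.
      def overall_aps(reading, writing, maths):
          return round((reading + writing + maths) / 3, 1)

      pupil_a = overall_aps(reading=21, writing=17, maths=21)   # 19.7
      pupil_b = overall_aps(reading=21, writing=21, maths=17)   # 19.7
      print(pupil_a, pupil_b, pupil_a == pupil_b)               # 19.7 19.7 True
      ```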

      • Roger Titcombe says:

        All this is true, however the essential weakness is in using SATs scores as the baseline in the first place. In LAs like Cumbria and Hackney where all pupils take SATs, it is possible to undertake a very interesting study by comparing the intake SATs and CATs scores of pupils from different primary schools. When we did this, admittedly some years ago, there was a clear pattern. The primary schools most under threat of failing to meet the floor targets had SATs scores that were the most inflated compared to the CATs/SATs pattern for the LA as a whole. It is the high stakes test effect. If you up the stakes, you can get the results, one way or the other. It is the fundamental flaw of all target driven improvement policies. It is a double disadvantage for the unfortunate children. They have had less high quality developmental teaching (replaced by cramming), which is what they really needed. They also risk being targeted by secondary teachers as underperforming (lazy?), when they really need teaching matched to their true levels of understanding. Is this still happening? Hackney secondaries have all the data they need to find out.

  11. IngenueGov says:

    I would never support our governing body being paid because we're very weak... but this is what the school itself is used to; it does not expect any significant contribution from the GB. For example, I asked at the last meeting why we didn't have the mandatory account of pupil premium expenditure on our website yet, and whether I should review what other schools publish; I got a few sniffy remarks about the private sector not understanding the pressures of the education sector. I wouldn't pay us in beans.

    • On that basis your governing body is entirely at fault.

      It is the responsibility of the governing body to ensure that the school meets required obligations. If your Chair doesn't understand that, s/he needs replacing. If your headteacher can't or won't get the work done (with or without governor help) then the governors should invoke capability procedures.

      Not that you should be paid anyway.

      • IngenueGov says:

        Thanks agov..exactly what I thought…will grit my teeth and push them on it ….

      • Tim Woodcock says:

        Capability – the use of capability creates more problems than it solves. Children are left without a trusted teacher, often for months on end, and the problems in their class increase with each passing day. Other teachers look at the teacher who is forced out of the school through the process and question whether they should stay in a school where capabilities are used, and pretty soon the school finds itself relying on rather too many supply teachers. Often the teachers who are performing will leave first because they can leave more easily, and the problem you were trying to tackle becomes ever greater. It is far better to look at genuine support. If the capability procedure were a genuine support procedure it would be called a support procedure.

        • Brian says:

          While not disagreeing with some of your points Tim, there are rather a lot of sweeping generalisations. For example in the LA where I live capability or competency procedures (the terms are often used interchangeably but are actually very different) are implemented only after a significant period of support has failed to resolve the problems. If a teacher is consistently failing to provide an acceptable standard for his / her pupils then it would seem that leaving them in place because the children are 'left without a trusted teacher' isn't a reasonable position to take.

  12. Kevin Heafield says:

    I appreciate this is not related to the progress measure, but I was very surprised that they labelled the science measures the way they did. Parents will assume like is being compared with like, when it is meaningless. A school that only enters 16% of the cohort comes out in the highest category, and the accompanying text says '87% of pupils achieved A*-C', which gives a completely false picture.

  13. Norizan Manaf says:

    Thank you. I am a school governor and find all these comments interesting and some relevant.

  14. Gaynor Cashin says:

    Peter
    Thank you for providing a really helpful example about using subject profiles to measure value added, showing that the overall VA for Pupil A was -0.7 and for Pupil B +0.4 because the two pupils had different subject score profiles at KS1 even though they had the same APS.
    I appreciate this is difficult to explain without a long, probably technical explanation, but could you give it a go please?! I am sure there are lots of us who would be very grateful.

    • Gaynor this is my understanding of how it works.

      The APS improvement of Pupils A and B from KS1 to KS2 (12.8 and 12.6 points respectively) is a measure of the absolute progress they have made from one point in time to another. Value Added (VA) is a measure of relative progress. It uses a statistical model which compares the actual progress of individual pupils with the outcomes in the same year of all other pupils with the same prior attainment (the Estimated Points Scores or the Predicted Scores). VA is calculated for English and Maths individually and averaged for the Overall measure.

      VA is calculated as (Actual – Predicted) + 100. The +100 is included so that all primary school VA scores are centred on 100. At KS2 a pupil who made exactly the same progress as the average for all pupils nationally with the same KS1 APS would have a VA of 100.0.

      Here is how the calculations work out for Pupil A and Pupil B.

      In English Pupil A had an actual KS2 points score of 30.5 and an Estimated Points Score of 32.5, giving a difference of -2.0. In Maths Pupil A had an actual KS2 points score of 34.4 and an Estimated Points Score of 33.8, giving a difference of +0.6. Averaging the English and Maths scores gives an actual Overall points score of 32.5 and an Estimated Overall points score of 33.2, resulting in an Overall VA for Pupil A of (Actual 32.5 – Predicted 33.2) + 100 = 99.3.

      In English Pupil B had an actual KS2 points score of 32.1 and an Estimated Points Score of 32.8, giving a difference of -0.7. In Maths Pupil B had an actual KS2 points score of 32.5 and an Estimated Points Score of 31.0, giving a difference of +1.5. Averaging the English and Maths scores gives an actual Overall points score of 32.3 and an Estimated Overall points score of 31.9, resulting in an Overall VA for Pupil B of (Actual 32.3 – Predicted 31.9) + 100 = 100.4.

      Hopefully this explains how Pupil B contributed positive value added to the school's data whilst Pupil A contributed negative value added despite making more 'absolute' progress.
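
      For anyone who wants to check the arithmetic, here is a minimal sketch of the per-pupil calculation in Python. The "estimated" scores are copied straight from the example above; in the real calculation they come from the DfE's statistical model of how pupils with the same KS1 prior attainment performed nationally, not from anything computed here.

      ```python
      # Per-pupil Overall VA: average the actual and estimated KS2 points
      # across subjects, take the difference, and centre on 100.
      def overall_va(actual, estimated):
          """actual, estimated: dicts of subject -> KS2 points score."""
          actual_overall = sum(actual.values()) / len(actual)
          estimated_overall = sum(estimated.values()) / len(estimated)
          return round((actual_overall - estimated_overall) + 100, 1)

      pupil_a = overall_va(actual={"English": 30.5, "Maths": 34.4},
                           estimated={"English": 32.5, "Maths": 33.8})
      pupil_b = overall_va(actual={"English": 32.1, "Maths": 32.5},
                           estimated={"English": 32.8, "Maths": 31.0})
      print(pupil_a, pupil_b)   # 99.3 100.4 - Pupil A below expectation, Pupil B above
      ```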

      A school’s VA is the aggregate of each individual pupil’s VA, statistically adjusted by application of ‘the shrinkage factor’, an adjustment that provides a better estimate of VA scores for schools with small numbers of pupils.

      My personal view is that VA is a potentially useful measure for analysing performance in individual subjects. However, it is much less useful when English and Maths are combined in the ‘overall’ measure as the level of abstraction obscures interpretation of the data.

      Further information on the methodology of how VA is calculated and an interactive ‘ready reckoner’ are available in the Ofsted RAISEonline documents library under ‘How Ofsted and DfE analyse your data’ or from the DfE website.

      Hope you find the explanation useful.

  15. BelleCat says:

    Thanks for this – I'm still head scratching! Should I consider the data dashboards to be, say, an 80% reflection of my school in comparison to "all" and "similar"?

  16. I only came across this site by chance this evening and have found its detailed comments most helpful and supportive.

    I have just chaired a meeting of our governing body and we had to deal with a complaint from a parent who wanted to know why we were in the lowest 40% for most of the categories even though our SATs results exceed national levels and we get 90%+ L4 and above and around 40% L5 across maths and English. We are a junior school and have no control over the teacher assessed scores that pupils bring with them. Explaining this to parents is not easy. At least the scores that a KS3 pupil brings with them are externally assessed. Our answer to this is to federate and we will do so on April 1st. It will be interesting to see if this makes any difference.

    I have several questions about the dashboard data and comments would be very helpful.

    1. Who decided to change the variable used to compare schools from free school meals and similar to progress, and why?

    2. How does an infant school measure progress when there is no starting point score?

    3. Why is no information provided on the number of schools, out of the 100+ chosen for comparison purposes, that appear in each quintile? We are simply told in the notes that the numbers in each category can vary. So you can get 95.6% and be in the top 40% and get 95.4% and be in the lowest 40%. Try explaining that to a pushy parent!

    4. Is it possible to have access to all the data used to produce these seriously flawed statistics?

    I’ve lots of other worries but I will stop there.

    • Roger Titcombe says:

      Where pupils move through the system from infant to junior to secondary to post 16 there is a vicious perverse incentive to inflate results/levels at every stage. The only solution is to stop using individual test results/levels for judging schools (they have no statistical validity anyway) and confine them to their proper purpose of helping teachers in the development of their pupils.

  17. Tim Woodcock says:

    The problem with data is less its accuracy than how it is used. This is not to say accuracy is not important: if data is used to measure something then it is vital that it is accurate. The real question is whether it is actually necessary. Is the use of data actually improving education or simply changing the way teachers teach? Is it better to achieve 100% A* or to have a class that loves learning, with each student at their own level? While we are drawn into the data debate, are we in fact letting our children down?

    This data is being used to push thousands of teachers out of the profession, leaving children without continuity in their education. Many thousands of children will not know who their teacher will be tomorrow because data and statistics are being used, often incorrectly, as evidence of competence. Schools are being closed or forced to become academies – sometimes directly forced, and sometimes feeling they have little choice and meekly following that route.

    Questioning data is vital. Questioning the validity of the use of data is vital. However, is it not more important that we question whether the education of our children should be predicated on data or on how young minds function?

