These are Warwick Mansell's concluding remarks to his academic paper 'Misleading the public understanding of assessment: wilful or wrongful interpretation by government and media', published in the Oxford Review of Education. At the moment you can read the full paper here, but I think there may be a limit on how many people can access it for free.
At the outset of this paper, I suggested that there were at least two broad categories of misuse of assessment data by the media and policy-makers. The examples
presented illustrate how some of these arise and how they may be wilfully promoted or naively misinterpreted, resulting in sensationalist outputs or political
point-scoring. Whether wilful manipulation or naïve misinterpretation, I would argue that a considerable reduction in their incidence and impact could be effected
if two remedies were applied. First, as argued elsewhere in this special issue, there is a need to communicate the possibility of measurement error in assessment
results to a wider audience. Ignoring the possibility of measurement error is only one type of possible misinterpretation of results in the media and by policy-makers;
its dangers are perhaps best illustrated in the ‘stickmen’ example above. However, being more open about the limits on the degree of certainty behind assessment statistics would make a considerable contribution to the public debate. It is not an easy remedy but a number of recent studies have sought to quantify the uncertainty around National Curriculum test level judgements for individual children (Ofqual, 2011). Consideration should be given to publishing error margins around individual test judgements. ‘Health warnings’ around school-by-school statistics should emphasise that they can provide only a partial appraisal of the quality of education that goes on in each institution, and that they are often significantly influenced by factors beyond a school’s control (for example, see
Mansell et al., 2009). Second, and perhaps more fundamentally, assessment experts need to engage more fully with public discussions relating to assessment information, and must speak out if they feel such information is being misused or misinterpreted. For several years now, I have worked in the space between the assessment community, on the one hand, and the mainstream media, on the other. Given the wealth of knowledge and technical expertise which exists in the former, and the appetite for assessment-related news in the latter, it is surprising in my view that there is not greater interaction between the two. With assessment data now right at the heart of questions about whether schools are succeeding or failing to educate young people effectively, the need for such engagement is pressing.