School Myths and the Evidence that Blows Them Apart
by Melissa Benn and Janet Downs should be required reading for everyone concerned about high-quality education for all. The negative narrative about public (state) schools holds, according to Downs, that "Public education is broken. Only competition, choice, standardization of outcomes and test-based accountability can fix it."
Precisely that narrative also governs discussion of schools in the United States. It is a story line underpinned by international large-scale assessments (ILSAs) such as PISA, which pretend, in defiance of the scholars who created these assessments, that a single assessment number reveals all you need to know about national school systems. Officials throughout Europe and North and South America are making decisions about their schools on the basis of very questionable ILSA results.
Here are Ten Things You Need to Know about International Assessments. You probably know that they compare apples and oranges. Still, you may be surprised to learn that these assessments were never intended to create football-style league tables that line up and rank nations against each other. Nor is the "international average" a weighted average representing all the students in participating nations.
There’s more in that vein, much more. The "Ten Things . . ." grew out of an analysis, School Performance in Context: The Iceberg Effect, completed in January by the National Superintendents Roundtable and the Horace Mann League in the United States. The analysis compared the educational systems of nine nations (Canada, China, Finland, France, Germany, Italy, Japan, the United Kingdom, and the United States) on 24 critical indicators divided into six broad dimensions: equity, social stress, support for young families, support for schools, student outcomes, and system outcomes. In many ways, the picture that emerges from the study of schools in the United States and the United Kingdom is similar.
Below is a table summarizing the results by the six dimensions for each of the nine nations. It’s been described as a sort of "consumer’s guide" to the quality of national school systems. In this table, the number in each box represents the number of points each nation received within each of the six dimensions, with a maximum of 40 and a minimum of 8. The higher the number, the better. Blue is a positive finding, gray is neutral, and maroon is negative; nations don’t want too many maroon boxes. What stands out here? Finland is the only one of the nine nations with five blue dimensions. The United States is the only one with three maroon dimensions. And China is the only one of the nine nations for which it is impossible to draw conclusions on three of the broad dimensions.
Why should educators in England worry about an analysis completed in the United States? Apart from the fact that the study examines the United Kingdom, the most direct answer comes surprisingly from Brazil. Commenting on “Ten Things,” Professor Luiz Carlos de Freitas of the State University of Campinas agrees the rankings need to be “demystified.” Regarding this North American analysis, he tells his colleagues in Brazil, “Valem para nós.” — “This applies to us too.”
And so it does. It applies to every nation that misguidedly focuses on the numbers generated by a single assessment. Nowhere is the damage wrought by these international large-scale assessments greater than it is in England and the United States.