It had long been understood that comparing all schools against each other on examination results was meaningless. Schools in poorer areas clearly fared less well in exams.
So that schools might better judge their performance (and so that external agencies such as the Inspectorate might also judge them), the concept of PCA (Principal Component Analysis) comparator schools was devised.
Comparator schools were identified by analysing the percentage of pupils with given characteristics: mothers with a degree; households where the main householder has never worked; registration for free school meals; living in the 15 per cent most deprived data zones; living in settlements of over 10,000 people. With the exception of the first characteristic, these are all indicative of deprivation.
The theory was that schools serving communities with similar deprivation levels would be compared with each other and not, unfairly, with schools serving more affluent areas. It remained true, of course, that media publicity on examination results continued to centre on achievement at the top end: proportions gaining 5+ Credit SG awards in S4 or proportions achieving 3+ Highers in S6. The hope was that the PCA method would allow a better internal judgment not only of how well a school was performing overall but, since comparator data could be applied at subject level, of the quality of each subject area or department. Her Majesty's inspectors have certainly dwelt studiously on such statistics when inspecting schools. The PCA method even allowed for comparisons to be made at local authority level.
The Association of Directors of Education in Scotland (ADES) in its evidence on attainment to the Scottish Parliament’s Education and Culture Committee has, however, challenged the very basis on which such comparisons are made.
ADES has taken two similar schools, School A and School B. Comparison of the 2010-11 S4 rolls shows that School A and School B are each other's closest PCA comparator, and indeed are two of the closest schools in Scotland. The proportions of their pupils residing in areas in the two lowest deciles in respect of deprivation are almost identical. Nonetheless, School B has significantly better results for 5+ awards at level 5 by the end of S4 than School A.
School A would be judged harshly by the HMIs.
The major statistical difference between the two schools, however, is not in relation to deprivation but to affluence. School B has more than three times as many pupils residing in the least deprived, or most affluent, decile. In other words, each school’s pupil population has a similar proportion of learners living in very poor areas but School B has a far greater proportion living in very affluent areas.
Most significantly, when the pupil cohorts of both schools are broken down into deciles and their attainment measured by SQA tariff points, the schools perform very similarly within each decile. In the top four deciles School B marginally out-performs School A, while School A substantially out-performs School B in the lowest decile.
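The composition effect ADES describes can be sketched arithmetically. A minimal illustration (the decile mixes and tariff scores below are hypothetical figures for the sake of the arithmetic, not the ADES data): if two schools earn identical average tariff points within each deprivation decile, the school with more pupils in the affluent deciles still posts the higher overall average.

```python
# Hypothetical average SQA tariff points per deprivation decile
# (decile 1 = most deprived, decile 10 = least deprived).
# Identical for both schools by construction.
tariff_by_decile = {1: 100, 5: 160, 10: 220}

# Hypothetical share of each school's S4 roll in each decile, in whole
# percent. Both schools have the same share in the most deprived decile,
# but School B has three times as many pupils in the affluent decile.
mix_school_a = {1: 40, 5: 50, 10: 10}
mix_school_b = {1: 40, 5: 30, 10: 30}

def overall_average(mix_percent):
    """Weighted average tariff score for a school, given its decile mix."""
    return sum(pct * tariff_by_decile[d] for d, pct in mix_percent.items()) / 100

print(overall_average(mix_school_a))  # 142.0
print(overall_average(mix_school_b))  # 154.0
```

School B appears 12 tariff points "better" overall despite adding no value in any individual decile, which is the bias in whole-cohort comparisons that the per-decile breakdown exposes.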
The current PCA method unfairly favours local authorities or schools with high proportions living in affluent areas. Once again, it is a matter of ‘To those that have shall be given’. Perhaps, however, as well as questioning the validity of the particular statistical tool, the Inspectorate and the Scottish Government might question why the inherent bias has taken so long to be spotted. They might also reassess their almost total reliance on such arithmetical measures of quality.
Alex Wood, a retired headteacher, is an Associate at the Scottish Centre for Studies in School Administration and a freelance writer