The perennial problem of primary high attainers


This post features analysis of the 2016 primary transition matrices, but mostly raises awkward questions.



Publication of the 2016 primary performance tables is imminent, together with revised national figures for achievement of the KS2 higher standard and new breakdowns by pupil characteristics, including receipt of pupil premium.

We also await the results of the TIMSS 2015 international comparisons study, which will show whether the proportion of pupils achieving the advanced benchmark in maths and science at age 9/10 has improved since 2011.

But the evidence already released paints a worrying picture of primary high attainment in 2016.

This includes:

  • The provisional data on high scaled scores in SFR39/2016 and
  • The 2016 national KS1-2 transition matrices, together with the average scaled scores for each KS1 prior attainment group.

This short post synthesises that evidence, providing a staging post from which to launch analysis of the new material.

The emerging narrative belies Ofsted’s repeated assertions that underachievement amongst the most able emerges in non-selective secondary schools, implying that high attainers typically make strong progress in primary schools, only to tread water at KS3.

But primary underachievement is a perennial problem. Five years ago I was reporting:

  • ‘Almost 4 in 10 high attaining primary pupils did not achieve the expected 2+ levels of progress between KS1 and KS2 in English and maths together;
  • Only 77% of high attaining primary pupils made the expected 2+ levels of progress in English, significantly less than the percentage of ‘middle attainers’ who did so (89%) and slightly less than the percentage of ‘low attainers’ (80%);
  • According to a report in the Daily Mail, some 1,300 high attaining primary pupils spread across 800 schools remained at Level 3 at the end of KS2, having been at Level 3 at the end of KS1.’

Individual pupil progress has gone from the accountability framework, but every pupil still counts towards the aggregate progress measure. Good schools rightly want to expose and address any pockets of under-performance.

When time is scarce and numbers small it may still be tempting to think the highest attainers need least support, but that can often prove a false economy.

The problem has been amplified by curriculum and assessment reform, which has increased the level of demand – sometimes deliberately and sometimes more by accident than design.

This will have restricted high attainment (though whether the level of demand has really increased at the topmost reaches is a question to which I cannot find the answer).

That aside, we might expect a dip in overall performance by virtue of the sawtooth effect, and retro-fixing the problems with primary assessment might bring some further marginal improvement.

Even after making these allowances, it is hard to escape the uncomfortable truth that primary high attainment remains in short supply.


High scaled scores

In September I highlighted the provisional 2016 KS2 attainment data in SFR39/2016.

This showed that:

  • While 53% of learners reached the expected standard in the aggregate headline measure for reading, writing and maths combined, only 5.4% – about 1 in 10 of those – achieved the higher standard. This required a scaled score of 110 or higher in both the reading and maths tests, plus a ‘working at a greater depth within the expected standard’ (WAGD) judgement in teacher-assessed writing. Since the test scale runs from 80 to 120 (and noting in passing that WAGD attracts a nominal score of 113 for the purpose of calculating writing progress), the higher standard marks off roughly the top quartile of the potential performance range.
  • In the separate assessments some 19% achieved the higher standard in reading and 17% did so in maths, while 15% were WAGD in writing. Additionally 23% achieved the higher standard in the grammar, punctuation and spelling (GPS) test, which does not contribute towards the aggregate headline measure.
  • 7% of eligible learners achieved a scaled score of 115+ in reading, as did 6.3% in GPS, but only 4% in maths. Some 1.1% reached the scale maximum in reading and 0.8% did so in GPS, but only 0.3% in maths. Results are not comparable with 2015 outcomes, but this pattern differs markedly from KS2 L6 test performance under the old regime, when maths led the field and reading barely registered.
  • There is also a noticeable difference in the ratios between performance on the aggregate measure and on the separate assessments. If the 2015 ratios for achievement at L5+ had been replicated in 2016 for achievement of the higher standard, some 10% of learners would have achieved the aggregate headline measure, almost double the 5.4% recorded. This hints at newly-emerging issues in securing all round high performance, as opposed to a ‘spiky profile’.
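The all-three-hurdles structure of the aggregate measure is simple enough to express directly. The sketch below is purely illustrative – the function name and example scores are my own, not drawn from the dataset:

```python
# Illustrative check of the aggregate higher standard described above:
# a scaled score of 110+ in both the reading and maths tests, plus a
# WAGD judgement in teacher-assessed writing. Scaled scores run 80-120.
def meets_higher_standard(reading: int, maths: int, writing_wagd: bool) -> bool:
    return reading >= 110 and maths >= 110 and writing_wagd

# A 'spiky profile' fails the aggregate measure despite one very high score:
print(meets_higher_standard(118, 104, True))   # False - maths below 110
print(meets_higher_standard(111, 110, True))   # True
```

Requiring every hurdle to be cleared at once helps explain why only 5.4% achieved the aggregate measure while 15-23% cleared each individual one.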

Transition matrices

At the end of October RAISEonline published 2016 national KS1-2 transition matrices.

This was something of a surprise since, when I asked in June 2016, no one seemed to know whether these would continue to appear.



The new matrices are built around 21 different KS1 prior attainment groups, replacing the old national curriculum levels. Hence the distinctions are much finer than previously and the groups typically much smaller.

The derivation of these groups is explained in Primary school accountability in 2016.

Pupils’ 2012 KS1 assessment outcomes are given a point score equivalent (27 for L4, 21 for L3, 17 for L2A etc.). Scores for reading, writing and maths are combined to give an average points score (APS), but maths is given double weighting, equivalent to reading and writing combined, so balancing the contribution made by English and maths.
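As a worked example of that weighting (the function name and example scores are illustrative only):

```python
# KS1 average point score with maths double-weighted, as described above,
# so that maths balances reading and writing combined.
# Old KS1 point equivalents: L4 = 27, L3 = 21, L2A = 17, etc.
def ks1_aps(reading: float, writing: float, maths: float) -> float:
    return (reading + writing + 2 * maths) / 4

# L2A in reading and writing (17 points each), L3 in maths (21 points):
print(ks1_aps(17, 17, 21))  # 19.0 - falls in prior attainment group 18

# L3 across the board places a pupil at exactly 21.0 - group 20:
print(ks1_aps(21, 21, 21))  # 21.0
```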

The matrices cover all state-funded schools, hospital schools and non-maintained special schools.

Those for reading and maths show the percentage and number of pupils not meeting the expected standard, meeting the standard and achieving a high scaled score (110+) in the relevant KS2 test.

The writing matrix shows the percentage and number working at the expected standard and WAGD.

All the teacher assessment gradations for pupils working below the expected standard are also included, both for writing and for pupils not entered for the maths and reading tests.

In the calculations below I have included all pupils entered for the maths and reading tests and, for writing, all the teacher assessment categories excluding those pupils with no result.

At the top end there are six prior attainment groups with an APS of 17.0 or higher, numbered 16 to 21. Only two are for pupils with an APS of 21 or higher: group 20 for those with a score from 21.0 to less than 21.5 (broadly L3 equivalent) and group 21 for those with a score of 21.5 or higher (above L3).

The pupil number matrices allow one to calculate the size of the groups:

  • Taken together, groups 16-21 account for some 190,000-200,000 pupils in each assessment.
  • Most of the groups at the top end contain between 30,000 and 50,000 pupils – for example there are some 46,000 in group 16.
  • But groups 20 and 21 are outliers. Group 20 is the largest of all, accounting for some 54,000 pupils in each assessment. Conversely, group 21 is tiny, containing only 326 pupils.

The analysis below features results for groups 16-21 and groups 20 and 21 combined.

The three matrices containing pupil percentages are reproduced below, but the analysis relies on the parallel pupil number matrices.






The reading transition matrix shows that:

  • About 6.5% of those taking the test in groups 16-21 did not meet the expected standard. The failure rate is 13% in group 16 and, as might be expected, falls for each subsequent higher group. In groups 20 and 21 combined the failure rate is only 1.4%.
  • Some 41% of those taking the test in groups 16-21 achieved the higher standard. The success rate is 23% in group 16 and, as expected, increases for each subsequent higher group. In groups 20 and 21 combined the success rate is about 64%.
  • Over 20% of those achieving the higher standard (and who took the test) were in a prior attainment group below group 16. There was even a handful in prior attainment groups 5 (broadly KS1 L1 equivalent) and below.
  • Only in groups 20 and 21 did the majority of pupils achieve the higher standard – 64% in the case of group 20 and 90% in the case of the tiny group 21.
  • But almost 35% of groups 20-21 (19,064 pupils) were only at the expected standard. This is a disappointing outcome for the pupils with the highest prior attainment.






The maths transition matrix reveals that:

  • Some 5.2% of those taking the test from groups 16-21 did not meet the expected standard. The failure rate is 12% in group 16 and falls for each subsequent higher group. In groups 20 and 21 combined the failure rate is less than 1%. 
  • Approximately 37% of those taking the test in groups 16-21 achieved the higher standard, four percentage points fewer than did so in reading. The success rate is 16% in group 16 and increases for each subsequent higher group. In groups 20 and 21 combined the success rate is 60%, also four percentage points lower than in reading.
  • Some 18% of those achieving the higher standard and who took the test came from a prior attainment group below group 16, slightly less than in reading. A handful came from groups 3-5.
  • As with reading, only groups 20 and 21 had a majority of pupils achieving the higher standard – 59% in the case of group 20 and 89% in the case of group 21. The first of these results is significantly lower than for reading, the second very similar.
  • But about 40% of groups 20-21 (21,830 pupils) were only at the expected standard, five percentage points higher than for reading and so an even more disappointing outcome for the highest prior attainers.





The writing transition matrix shows that:

  • Some 3.2% of those with a teacher assessment result from groups 16-21 did not meet the expected standard. The failure rate for group 16 is about 5.6% and falls for each subsequent higher group, reducing to 0.6% for groups 20 and 21 combined.
  • About 34% of those with a teacher assessment result from groups 16-21 were WAGD. The success rate in group 16 is 18% and rises for each subsequent higher group, reaching about 58% for groups 20 and 21 combined.
  • Roughly 15% of those who were WAGD (and had a teacher assessment result) came from a prior attainment group below group 16. Hardly any came from groups 5 and below.
  • Only groups 20 and 21 recorded a majority of pupils WAGD – 57% in group 20 and 75% in group 21. These percentages are significantly lower than the percentages from these groups recording a higher standard in reading and maths.
  • But approaching 42% of groups 20 and 21 combined (22,612 pupils) were working only at the expected standard.


Average scaled scores by KS1 prior attainment group

The primary school accountability document also helpfully provides the national average scores for each prior attainment group. The average scores for groups 16-21 are:


Group   KS1 APS        KS2 average    KS2 average    KS2 average
                       reading score  writing score  maths score
16      17 to <18      105.6          104.1          105.0
17      18 to <19      106.8          104.7          106.3
18      19 to <20      108.0          105.8          107.5
19      20 to <21      109.0          106.1          109.4
20      21 to <21.5    111.6          108.7          110.6
21      21.5 or more   115.7          110.4          114.5


This shows that:

  • Except for group 19 the KS2 average reading score is slightly ahead of the average maths score. The gap increases slightly amongst the highest attaining groups.
  • The gap between the average writing score and the average scores in reading and maths increases markedly for the highest attaining groups.
  • Even amongst the very highest prior attainment group the average writing score is somewhat lower than the nominal 113 points attributed to WAGD.
  • In both reading and maths the average score exceeds the higher standard of 110 only amongst prior attainment groups 20 and 21.

What will the forthcoming data add to this picture?

Given the curriculum and assessment changes, reliable data about trends will be hard to come by. TIMSS results predate the reforms and will only reveal changes between 2011 and 2015.

For high attainers the critical outcome will be the percentages of 9/10 year-olds achieving the advanced benchmark in maths and science respectively. Those percentages were 18% and 11% respectively last time round.

I concluded elsewhere that continued improvement on the existing trajectory in maths is necessary to give the government a fighting chance of achieving ‘best in Europe’ status by 2020, in line with its target, at least as far as high attainers are concerned. Radical improvement is necessary in science, contrary to the prevailing downward trend.

The new SFR will give revised national, regional and local authority figures including:

  • The percentage (and number) achieving the aggregate higher standard headline measure, the higher standard in each test and WAGD in writing teacher assessment, each broken down by gender.
  • The percentage (and number) achieving each scaled score on each test, also broken down by gender.

It will also supply further national breakdowns by pupil characteristics, which should include:

  • KS2 attainment by prior attainment at KS1 – including the higher standard aggregate headline measure, the higher standard in each test and WAGD in writing. This will presumably utilise the 21 prior attainment groups described above, and so effectively reproduce the transition matrices (though possibly for a subtly different range of schools, since the 2015 version excludes hospital and non-maintained special schools).
  • The percentage of disadvantaged and non-disadvantaged pupils and the percentage of FSM and non-FSM pupils achieving all these measures at the higher standard/WAGD, each broken down by gender.
  • Possibly the percentage of FSM and non-FSM pupils achieving all these measures broken down by ethnicity and gender, although the 2015 version reports only the expected standard this way.

The SFR should also include the primary disadvantage gap index, showing whether the gap between all disadvantaged and non-disadvantaged pupils has fallen compared with 2011-2015.

But it is unclear whether there will be any supplementary analysis to clarify the trend in closing excellence gaps amongst high attainers.

In place of distinctions based on KS2 fine grades, it will presumably substitute the numbers and percentages of disadvantaged and all other pupils achieving each scaled score.

Since fine grades cannot be mapped on to scaled scores, this will enable only crude comparisons with 2015 outcomes, when:

  • 11,639 disadvantaged learners achieved a fine grade level of 5.5 or above compared to 70,330 non-disadvantaged learners – six times as many.
  • 5,621 disadvantaged learners achieved 5.8 or higher, set against 39,857 non-disadvantaged learners – seven times as many.
  • 2,578 disadvantaged learners achieved 6.0 while 21,957 non-disadvantaged learners – more than eight times as many – did so.
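Those ratios follow directly from the quoted counts; a quick arithmetic check:

```python
# Disadvantaged vs non-disadvantaged pupil counts at each 2015 fine
# grade threshold, as quoted above.
counts = {
    "5.5+": (11639, 70330),
    "5.8+": (5621, 39857),
    "6.0": (2578, 21957),
}
for grade, (disadvantaged, other) in counts.items():
    print(grade, round(other / disadvantaged, 1))
# Ratios come out at 6.0, 7.1 and 8.5 - i.e. six, seven and more than
# eight times as many, as stated.
```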



Ahead of the primary performance tables, the accompanying SFR and TIMSS 2015 results, there is worrying evidence that:

  • The proportion of all learners achieving the aggregate higher standard headline measure in 2016 is much lower than it ought to be, at only 5.4%.
  • The proportion of the highest prior attainment groups going on to achieve only the expected standard in each of maths, reading and writing is much higher than it ought to be, ranging from 35-42%.

Yet the matrices also reinforce the strong correlation between KS2 success and high prior attainment – in each case roughly four out of every five pupils achieving the higher standard or WAGD were from one of the top six prior attainment groups.

The sawtooth pattern predicts some improvement across the board over the next few years, as teachers become more familiar with the reformed curriculum – and as assessment teething problems are eventually eradicated.

But the sheer scale of high attainers’ underachievement cannot be excused entirely by the turbulence of reform.

What then are the causes?

  • Are some teachers still too ready to assume the most able need less support?
  • Are accountability reforms, designed to give equal value to every child’s achievement, insufficient to counteract this tendency?
  • Is the removal of individual pupil progress targets having a deleterious effect?
  • Are some high attainers suffering the limitations of teacher subject knowledge?
  • Is top-end differentiation effective? Is the misinterpretation of mastery at fault?
  • Is Ofsted too focused on the secondary sector?

When high attainers are underachieving, disadvantaged high attainers are typically the worst affected. Pupil premium should be ameliorating this by helping to eradicate excellence gaps, but the gaps remain stubbornly wide. The forthcoming data should illustrate the 2016 position but will likely obfuscate the trend.



November 2016











6 thoughts on “The perennial problem of primary high attainers”

  1. Given the changes in what is expected, how it is tested, the wild variation in Writing moderation, the shift in thresholds, and the arbitrary ‘high score’ threshold, I’m not sure your conclusions can be anything other than speculation.


    1. Hi Michael

      The logic of my argument runs like this:

      1. There was a problem with KS2 high attainment and progress before the introduction of the new regime – and there still is afterwards.

      2. There’s no doubt that the reforms (and associated problems with how they have been implemented) have contributed to the poor outcomes in 2016.

      3. Although, other things being equal, one might reasonably expect the highest attainers to have been least affected by such turbulence.

      4. Given the long-standing nature of the problem – as well as the comparative insulation of the highest attainers – it seems highly unlikely that the reforms (and associated implementation issues) entirely explain its occurrence in 2016.

      5. It seems much more likely that other factors are partly to blame. (I make no attempt to quantify the impact of the reforms relative to these other factors, but I suggest what some of them might be.)

      6. It follows that we should consider carefully how best to tackle those factors, as well as sorting out the reforms and the problems associated with them, rather than assuming that the problem lies entirely with the latter.

      7. This will involve some honesty and humility, since many have a vested interest in suggesting that the fault lies entirely with the architects of reform – and is nothing to do with them.

      I can’t prove any of this, but it seems to me much more likely than the alternative explanation – so I’d suggest the onus ought to be on those who argue the reverse to prove their case.

      Good luck before the select committee.



      1. I think your first point is absolutely correct. I just don’t think the new data can tell us very much more. The “high scores” are nothing like the high attainers of Level 5 or 6, so we cannot draw comparisons. Had the ‘high score’ threshold been set at 105, the picture would have been very different. It’s just numbers at the moment.
        I don’t argue that there is no issue; merely that the 2016 data adds nothing to the existing case of any value, really.


      2. I agree that it’s difficult to make comparisons between the position under the old system and the new (though not impossible as the primary disadvantage gap index shows).

        I know NAHT’s position is that the 2016 data is itself flawed for league table purposes, but I don’t think that entirely invalidates analysis of performance at national level.

        The two pieces of national data I’ve highlighted – percentage achieving the higher standard headline measure and percentage of the very highest prior attainers achieving the higher standard/WAGD in each assessment – are alarming and deserve to be taken seriously. I don’t think you can dismiss them as ‘just numbers’, or take the line that the turbulence of reform is entirely to blame.

        I do think that they are admissible evidence of the continuation of a longstanding problem which has, if anything, been inflated by the reforms that were supposed to address the issue. There is a good chance the inflation will disappear once the reforms are embedded and assessment problems addressed, but the jury is out on the longer term impact of those reforms on the fundamental problem.

        This data provides a baseline from which to monitor progress over the coming years, assuming that the data continues to be published. I firmly believe the national transition matrices should be retained because of the light they shed on this issue.


    2. Your comment about writing moderation is absolutely true. I give the example of my own daughter and her school in the new system. My daughter’s rather strict teacher and school followed the guidance to the letter and the number of children reaching expected standard and greater depth in the writing assessment was well below national average. Yet in the SATs the children at her school performed much better than the average. Conversely, some other local schools achieved much higher teacher assessments for writing than they did for reading. Some local schools in which no pupils achieved high scores in the reading or maths SATs have twice as many children as my daughter’s school teacher-assessed as working at greater depth. Moderation was a farce.

      Now at an outstanding secondary school, my daughter, who was assessed to be at the expected standard in writing yet achieved a reading score of 116, has scored the highest marks in English assessments for the last four years by a significant margin (that’s 700 pupils). Many of her peers who went to other schools were assessed by their teachers at greater depth. My daughter is an August-born who achieved Level 3 across the board at KS1. The argument that she was failed by her school when she didn’t achieve greater depth at KS2 is ridiculous. In fact, her teacher and school seem to have prepared her rather well and followed the rules to the letter. It is clearly the lack of proper guidance and moderation that is at fault. Worse still, performance in the league tables looks awful in a way that is totally unfair. If I were a primary school head or an Ofsted inspector, I would completely disregard the writing assessment. I can’t believe this issue has not received greater publicity.

