The predictive value of GCSEs and AS-levels: what works for university entrance?

By Blog Editor, IOE Digital, on 23 May 2013

Chris Husbands
Key Stage 4 and 5 qualifications are again at the centre of a controversy: which are most useful for fair university admissions – GCSEs or AS-levels? This matters because the DfE has announced that AS-levels are to become a standalone qualification, rather than the first half of pupils’ A-level results. The DfE argues that decoupling the AS in this way will put an end to time-consuming Year 12 assessment that eats into teaching and learning. It is relaxed about the change, but some universities – most notably Cambridge University – beg to differ.
Cambridge has calculated that, apart from Mathematics, a pupil’s performance at AS-level provides a “sound to verging on excellent” indicator of Tripos (BA degree) potential across all its major subjects; in Maths, STEP, an advanced Mathematics assessment, proved the better indicator. GCSEs, by contrast, were found to be less effective: around 10% of Cambridge entrants who apply with A-levels show very strong AS performance despite less impressive GCSE results, and around three-quarters of this group come from state schools and colleges. On that basis, Cambridge argues that the loss of AS-levels will harm student choice, flexibility and the opportunity for all pupils to apply to university with confidence.
The DfE did its own number crunching. It argued that GCSEs were accurate predictors of university outcomes in 69.5% of cases, and that knowing both GCSE and AS results improved the accuracy of the prediction only slightly – to 70.1%. On this basis, it concluded that the added value of AS results for university admissions was very low. What should we make of this disparity, which was analysed in more detail by Full Fact?
The two calculations rest on very different methodologies. Cambridge’s sums were based only on students who were successful in its admissions process – a select group – whilst the DfE drew on a much larger dataset of some 88,000 students. There were also important differences in granularity. As input data, the DfE used the overall grades (A, B, C, etc.) secured in GCSE and AS examinations, whereas Cambridge used the much finer-grained UMS scores on AS units. Universities routinely receive UMS scores, though few in practice make use of them. For outcome data, the DfE again used an overall measure – whether students in the global dataset secured a 2:1 or above in 2011 – whereas Cambridge used results in Part I Tripos examinations between 2006 and 2009. Moreover, the DfE used a single score across all subjects to test whether GCSE and AS results were good predictors in general, whereas Cambridge compared GCSE/AS and Tripos scores on a subject-by-subject basis.
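To make the granularity point concrete, here is a minimal, purely illustrative sketch in Python – emphatically not the DfE’s or Cambridge’s actual data or method – using synthetic numbers. It asks how well a fine-grained mark (think UMS scores) and the same mark collapsed into a handful of grade bands each predict a binary degree outcome (think 2:1 or above); every figure in it is invented for illustration.

    # Illustrative only: synthetic data, not the DfE's or Cambridge's
    # datasets or methods. Shows how collapsing a fine-grained mark into
    # coarse grade bands can discard predictive information.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 88_000  # same order of magnitude as the DfE's dataset, purely for flavour

    ability = rng.normal(0, 1, n)                    # unobserved aptitude (synthetic)
    fine_score = ability + rng.normal(0, 0.5, n)     # fine-grained mark, cf. UMS
    coarse_grade = np.digitize(fine_score, [-1.0, 0.0, 1.0])  # collapse into 4 bands
    outcome = (ability + rng.normal(0, 0.8, n)) > 0  # binary outcome, cf. 2:1 or above

    def accuracy(predictor):
        # Crude rule: predict success when the predictor exceeds its median,
        # then report the proportion of correct predictions.
        prediction = predictor > np.median(predictor)
        return (prediction == outcome).mean()

    print(f"fine-grained score: {accuracy(fine_score):.3f}")
    print(f"coarse grade bands: {accuracy(coarse_grade):.3f}")

On synthetic data of this kind the fine-grained predictor typically edges out the banded one, which is the intuition behind Cambridge’s use of UMS scores; how large that edge is on real records is precisely what the two analyses dispute.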
This is complex stuff. Obviously, we all want policy to be informed by the most robust analysis available: analysis that is as fine-grained as the data allow, makes full use of available records and takes account of important variables – in this instance, subject and institutional differences. But that remains a major challenge for policy and practice. What is also at stake is qualifications policy, which needs to serve stakeholders beyond the higher education sector.
Perhaps the elephant in the room is the continuing lack of real transparency in university admissions. As the debate between Cambridge and the DfE rumbled on, the Higher Education Policy Institute’s annual conference was hearing about just how in the dark schools feel when it comes to the admissions process: which types of information do tutors take notice of and prioritise? Why such apparent differences across institutions? Tutors may use prior attainment at Key Stage 4 and/or 5; they may use the personal statement; they may use academic and other references; some will interview candidates or run aptitude tests. But few universities state publicly the significance they attach to each source of information. If we had more robust data on the predictive value of different factors – at national level – that might help to pave the way for greater consistency and transparency in admissions, and help pupils to choose the qualifications that are right for them.


4 Responses to “The predictive value of GCSEs and AS-levels: what works for university entrance?”

  • 1
    behrfacts wrote on 23 May 2013:

    The main purpose of AS was to encourage more breadth in the academic system post-16; the argument against it is to do with too much assessment, with the lost breadth to be compensated for via improved non-academic routes (the problem is we don’t really have them yet, as you note in your interim report to the Labour Party). The university outcomes prediction is, as you indicate, a bit of a red herring. I suspect 16-19 schools and colleges would like to have an additional measure to hand to improve the accuracy of their UCAS application processes, especially if more accountability pressures come their way via HE destination targets.

  • 2
    Mark Thornber wrote on 23 May 2013:

    Cambridge have also made the case that AS is a better predictor for students who took GCSEs at under-performing 11-16 schools – surely a strong point in their favour, given the government’s desire to improve access to highly regarded HE institutions. It’s worth noting that the DfE took no account of the relative performance of the schools involved, even though this is surely available from the National Pupil Database. Most of the paper is taken up with tedious detail about the corrections made for differing standards at universities, yet the most important input factor was completely ignored.

  • 3
    Chris Husbands wrote on 24 May 2013:

    It’s fair to say that this blog focuses only on the predictive issues. It doesn’t address the question of what AS does for students (broader curriculum, allowing students to try subjects they may lack confidence in whilst offering an interim award) or schools (Mark’s point). The questions are complex and need a few more words than were on offer here!

  • 4
    Mike Griffiths wrote on 27 May 2013:

    And can we have data that show how many students, after a year of studying for 4 AS-levels, change their minds over which to pursue to A-level? This flexibility – to choose at 17 rather than at 16 – will be lost; a shame, since in my experience it is common.