The predictive value of GCSEs and AS-levels: what works for university entrance?

By Blog Editor, IOE Digital, on 23 May 2013

Chris Husbands
Key Stage 4 and 5 qualifications are again at the centre of a controversy: which are more useful for fair university admissions – GCSEs or AS-levels? This matters because the DfE has announced that AS-levels are to become a standalone qualification, rather than the first half of pupils’ A-level results. The DfE argues that decoupling the AS in this way will put an end to assessment in Year 12 that eats into time for teaching and learning. It is relaxed about the change, but some universities – most notably Cambridge – beg to differ.
Cambridge has calculated that, apart from Mathematics, a pupil’s performance at AS-level provides a “sound to verging on excellent” indicator of Tripos (BA degree) potential across all its major subjects; in Maths, STEP, an advanced Mathematics assessment, proved the better indicator. GCSEs, by contrast, were found to be less effective: around 10% of Cambridge entrants who apply with A-levels show very strong AS performance despite less impressive GCSE results, and around three-quarters of this group are from state schools and colleges. On that basis, Cambridge claims that the loss of AS-levels will restrict student choice and flexibility, and undermine the opportunity for all pupils to apply to university with confidence.
The DfE did its own number crunching. It argued that GCSEs accurately predicted university outcomes in 69.5% of cases, and that knowing both GCSE and AS results improved the accuracy of the prediction only slightly – to 70.1%. On this basis, it concluded that the added value of AS results for university admissions was very low. What should we make of this disparity, which Full Fact has analysed in more detail?
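To make that comparison concrete, here is a minimal sketch in Python of a DfE-style calculation, run on entirely synthetic data. The latent “ability” score, the noise levels and the decision thresholds are all hypothetical assumptions for illustration; the real analysis drew on records for some 88,000 students and will have been considerably more sophisticated.

```python
# A hypothetical, simplified version of the DfE-style comparison: how much
# does adding AS results improve prediction of a "2:1 or above" degree
# outcome over GCSE results alone? All data here are synthetic.
import random

random.seed(0)

def synthetic_student():
    """One made-up student record: (gcse_score, as_score, got_a_2_1)."""
    ability = random.gauss(0, 1)                    # latent "ability"
    gcse = ability + random.gauss(0, 1.0)           # GCSEs: a noisier signal
    as_level = ability + random.gauss(0, 0.5)       # AS: a less noisy signal
    got_2_1 = ability + random.gauss(0, 0.8) > 0    # degree outcome
    return gcse, as_level, got_2_1

students = [synthetic_student() for _ in range(88_000)]

def accuracy(predict):
    """Share of students whose degree outcome the rule gets right."""
    return sum(predict(g, a) == out for g, a, out in students) / len(students)

# Predict a 2:1 from GCSE score alone, then from GCSE and AS combined.
print(f"GCSE only: {accuracy(lambda g, a: g > 0):.1%} correct")
print(f"GCSE + AS: {accuracy(lambda g, a: (g + a) / 2 > 0):.1%} correct")
```

With invented numbers the gap will not match the DfE’s 69.5% versus 70.1%, but the shape of the question is the same: how many extra correct predictions does the second qualification buy?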
The two calculations are based on very different methodologies. Cambridge’s sums covered only the students who succeeded in its admissions process – a select group – whilst the DfE’s analysis drew on a much larger dataset of some 88,000 students. There were also important differences in granularity. As input data, the DfE used the overall grades (A, B, C, etc.) secured in GCSE and AS examinations, whereas Cambridge used the much finer-grained UMS scores on individual AS units. Universities routinely receive UMS scores, though few in practice make use of them. For outcome data, the DfE again used an overall measure – whether students in the national dataset secured a 2:1 or above in 2011 – whereas Cambridge used results on Part I Tripos examinations between 2006 and 2009. Moreover, the DfE used a single score across all subjects to test whether GCSE and AS results were good predictors in general, whereas Cambridge compared GCSE/AS and Tripos scores subject by subject.
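The granularity point matters more than it might look. A crude sketch, again on invented data, shows why: collapsing a fine-grained mark (such as a UMS score out of 100) into a handful of grade bands throws away information near the decision boundary. The grade boundaries and cut-offs below are hypothetical.

```python
# Hypothetical illustration of the granularity point: coarse grade bands
# versus fine-grained marks as predictors of a degree outcome. Synthetic data.
import random

random.seed(1)

students = []
for _ in range(50_000):
    ability = random.gauss(60, 12)
    ums = min(100.0, max(0.0, ability + random.gauss(0, 5)))  # fine AS mark
    got_2_1 = ability + random.gauss(0, 8) > 62               # degree outcome
    students.append((ums, got_2_1))

def band_midpoint(ums):
    """Collapse a mark into a grade band (hypothetical boundaries),
    represented by the band's midpoint."""
    if ums >= 80: return 90
    if ums >= 70: return 75
    if ums >= 60: return 65
    return 50

def accuracy(score):
    """Predict a 2:1 when the score clears a fixed cut-off; report hit rate."""
    return sum((score(u) > 62) == out for u, out in students) / len(students)

print(f"Fine-grained marks: {accuracy(lambda u: u):.1%} correct")
print(f"Grade bands only:   {accuracy(band_midpoint):.1%} correct")
```

On this toy setup the fine-grained predictor comes out slightly ahead, for an obvious reason: near the cut-off, the bands cannot distinguish a 61 from a 69.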
This is complex stuff. Obviously, we all want policy to be informed by the most robust analysis available: analysis that is fine-grained, makes full use of the records we hold, and takes account of important variables such as, in this instance, subject and institutional differences. That remains a major challenge for policy and practice. What is also at stake is qualifications policy, which needs to serve stakeholders well beyond the higher education sector.
Perhaps the elephant in the room is the continuing lack of real transparency in university admissions. As the debate between Cambridge and the DfE rumbled on, the Higher Education Policy Institute’s annual conference was hearing about just how in the dark schools feel when it comes to the admissions process. Which types of information do tutors take notice of and prioritise? Why are there such apparent differences across institutions? Tutors may use prior attainment at Key Stage 4 and/or 5; they may use the personal statement; they may use academic and other references; some will interview candidates or run aptitude tests. But few universities state publicly the weight they attach to each source of information. If we had more robust data on the predictive value of these different factors – at national level – it might help to pave the way for greater consistency and transparency in admissions, and help pupils to choose the qualifications that are right for them.