International practices in selective admission: UK experience
Simon Beeston & Joanne Emery
Selective Admission in Dutch University Education
VSNU, Amsterdam, 30th May 2012
Overview of presentation
1) Background issues in the UK
2) Cambridge Assessment admissions tests
test constructs – what the tests are trying to do
stakeholder relations
overview of some of our tests
3) Research evidence
predictive validity
fairness - bias analyses, coaching
Background issues in the UK
Traditionally, university selection in England has been based on:
A-level (‘advanced level’) results: national exams taken at age 18; students take 3 or 4 subjects over two-year courses (passes are grades A* to E); AS-levels may be taken at the end of the first year
GCSE (‘General Certificate of Secondary Education’) results: national exams taken at age 16; students take around 8 to 10 subjects (passes are grades A* to G)
Personal statements, references and interviews
Background issue 1
Too many applicants with the highest grades
Increasing numbers of applicants with the highest A-level grades for competitive institutions and courses (e.g. medicine, veterinary medicine)
In England, candidates with grades AAA at A-level:
- 1996: 11,000 (8% of candidates)
- 2006: 24,000 (15% of candidates)
- 2010: 31,000 (17% of candidates)
→ tests have a differentiating function
Source: Emery (2007; 2011) Statistics Reports no.6 & no.36, Cambridge Assessment
Background issue 2: Diversity of applicants’ qualifications
Increasing proportions of applicants without A-levels
- Diversity of qualifications within the UK
- Overseas applicants
Diversity of subject combinations for the same course
Mature applicants
- are exam standards equal over time?
- decay of knowledge over time or changes to subject content
→ tests provide a yardstick (standard measure) for comparing applicants
[Chart: applicants from within the UK]
Background issue 3: Advantage of private schools, social groups
Nick Clegg attacks the rift between state and private schools’ A-level results – private school pupils are more than three times as likely to earn crucial grades, new report reveals (The Observer, Sunday 20 May 2012)
A-level results: gap may have widened between state and private (The Guardian, Thursday 18 August 2011)
Widening participation agenda, social mobility, access to the professions
Source: Emery (2011) Statistics Report no. 36, Cambridge Assessment
School types of high A-level attainers (2010)
Criteria             Comprehensive (%)   Selective (%)   Private (%)   6th form college (%)   FE college (%)   Total N
A* A* A* or better   25.5                20.8            39.4          11.7                   2.5              4,639
A* A* A or better    28.2                19.2            36.0          13.4                   3.0              11,180
A* A A or better     30.0                18.4            33.7          14.4                   3.4              21,232
A A A or better      30.5                18.2            32.8          14.7                   3.7              30,144
All candidates       44.9                11.3            15.7          20.1                   7.8              180,181
Importantly –
There is research evidence that A-level & GCSE grades predict university performance differently for private and state school students, e.g. Smith & Naylor (2001); Smith & Naylor (2005); Ogg, Zimdars & Heath (2009)
Reasons?
- differences in motivation and behaviour once at university?
- short-term gains from teaching (lower underlying academic ability)?
→ again, need for a yardstick for comparing applicants (see the differential-prediction sketch below)
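One common way to test for this kind of differential prediction is a regression of university performance on prior attainment with a school-type interaction term. The sketch below is a minimal illustration in Python under invented assumptions: the dataset, file name and columns (alevel_score, school_type, degree_mark) are hypothetical, and this is not the analysis used in the cited studies.

```python
# Minimal sketch of a differential-prediction check. The dataset,
# column names and file are hypothetical; this is not the analysis
# from Smith & Naylor or Ogg, Zimdars & Heath.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("applicants.csv")  # hypothetical file

# Regress degree outcome on prior attainment, school type, and their
# interaction. A significant interaction term means A-level grades
# predict university performance differently across school types.
model = smf.ols("degree_mark ~ alevel_score * C(school_type)", data=df).fit()
print(model.summary())
```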
Background issue 4: Only predicted exam grades are available
University applications and offers are made before A-level grades are known
AS-level results (end of year 1):
- students don’t have to take them or declare the results
- possible move to a linear system for all
The proposed move to applying after results are known (‘PQA’) has been rejected
→ tests can provide concrete evidence of academic ability – an early indicator
Why use an admissions test?
To differentiate between ‘equal’ applicants in a valid and transparent way
To provide a common measure for comparison of candidates (see the standardisation sketch after this list):
- different A-level subjects
- different qualifications
- graduate, mature and overseas applicants
- applicants from different social and educational backgrounds
To assess generic skills needed to be successful in undergraduate study
To assess the ability to apply subject-specific skills/knowledge
To cut applicants down to more manageable numbers for interview, if pilot years support this
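As a minimal sketch of the ‘common measure’ idea: one simple approach is to standardise raw test scores against the test-taking cohort, so that applicants from different qualification routes sit on one scale. The marks below are invented for illustration; this is not a description of Cambridge Assessment’s actual scaling.

```python
# Minimal sketch: put raw admissions-test marks on a common scale by
# standardising against the cohort (z-scores). Hypothetical data only;
# not Cambridge Assessment's actual scaling procedure.
from statistics import mean, stdev

cohort_scores = [41.0, 55.5, 62.0, 48.5, 70.0, 58.0]  # hypothetical raw marks

def z_score(raw, cohort):
    """Standardise a raw mark against the cohort mean and spread."""
    return (raw - mean(cohort)) / stdev(cohort)

# Applicants with different qualification backgrounds can now be
# compared on the same yardstick.
for raw in (48.5, 70.0):
    print(f"raw {raw} -> z = {z_score(raw, cohort_scores):+.2f}")
```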
How are admissions test scores used?
Cautiously at first – pilot years
To supplement the information provided by academic attainment, interviews, references and contextual data
Institutions apply formulae and weightings to provide the best possible predictive validity (a sketch of such a formula follows below)
Sometimes as a hurdle to the interview stage – if cut scores are applied, they are set well below the point at which applicants would typically be successful
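To make ‘formulae and weightings’ and the interview hurdle concrete, here is a minimal sketch under invented assumptions: the weights, the applicant scores and the cut score are all hypothetical, not any institution’s actual formula.

```python
# Minimal sketch of a weighted composite score plus a pre-interview
# hurdle. Weights and cut score are hypothetical, not any
# institution's actual admissions formula.

# Hypothetical applicants: (name, test z-score, attainment z-score)
applicants = [
    ("A", 1.2, 0.4),
    ("B", -0.8, 1.1),
    ("C", 0.3, 0.2),
]

W_TEST, W_ATTAIN = 0.6, 0.4  # hypothetical weights, e.g. tuned on past cohorts
CUT_SCORE = 0.0              # per the slide: set well below the success point

for name, test_z, attain_z in applicants:
    composite = W_TEST * test_z + W_ATTAIN * attain_z
    invited = composite >= CUT_SCORE
    print(f"{name}: composite {composite:+.2f}, invite to interview: {invited}")
```

In practice the weightings would be chosen by checking which combination best predicts later university performance on past cohorts, which is what ‘best possible predictive validity’ refers to.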
Cambridge Assessment admissions tests
Stakeholder relations
Test production and delivery processes
Cambridge Assessment: test development, production and administration

Construct Definition
- Produce a model of how students represent knowledge and develop competence in the subject domain

Item & Test Production
- Pretest items and undertake analysis: classical and IRT (see the sketch below)
- Construct equated test forms
- Train item writers, review and edit submitted items, produce assets and proof items

Administration
- Provide marketing communications and events
- Manage entries and results online
- Despatch and log in test materials
- Train examiners, double-mark writing tasks; monitor examiners and third-mark tasks
- Aggregate results for publishing; manage appeals

Research & Validation
- Design tasks or situations that allow one to observe students’ performance
- Pilot tests to confirm item and test performance
- Validate test scores for their intended use

Continuing research and evaluation, revision and improvement of tests – “reasoning from evidence”
Quality assure every item in every test
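As a hedged illustration of the ‘classical and IRT’ pretest analysis named above: classical item analysis typically reports each item’s facility (proportion correct) and discrimination (correlation between the item and the rest of the test), while IRT models the probability of a correct response as a function of candidate ability. The response matrix below is invented; this is not Cambridge Assessment’s actual analysis pipeline.

```python
# Minimal sketch: classical item analysis on pretest responses, plus
# the two-parameter logistic (2PL) IRT item response function.
# The response matrix is invented, not real pretest data.
import math
import numpy as np

# Rows = candidates, columns = items (1 = correct, 0 = incorrect).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

totals = responses.sum(axis=1)
for j in range(responses.shape[1]):
    item = responses[:, j]
    facility = item.mean()              # proportion answering correctly
    rest = totals - item                # score on the remaining items
    discrimination = np.corrcoef(item, rest)[0, 1]  # item-rest correlation
    print(f"item {j + 1}: facility {facility:.2f}, discrimination {discrimination:+.2f}")

def p_correct_2pl(theta, a, b):
    """2PL IRT: P(correct | ability theta) for an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(f"P(correct at theta=0, a=1.2, b=-0.5) = {p_correct_2pl(0.0, 1.2, -0.5):.2f}")
```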
Example: TSA – Critical Thinking
[Chart: distribution of candidate responses across the answer options – 46.7%, 10.8%, 8.0%, 13.7%, 16.0%; 4.8% missing]
Example: TSA – Problem Solving
Example: BMAT – Critical Thinking