Related Study 1 - The Ability of a Medical School Admission Process
TRANSCRIPT (7/30/2019)
ACADEMIC MEDICINE, VOL. 75, NO. 7 / JULY 2000, p. 743
RESEARCH REPORT
The Ability of a Medical School Admission Process to Predict Clinical Performance and Patients' Satisfaction
William T. Basco Jr., MD, Gregory E. Gilbert, MSPH, Alexander W. Chessman, MD, and Amy V. Blue, PhD
ABSTRACT
Purpose. The authors evaluated the ability of a two-step admission process to predict clinical performance and patients' satisfaction on a third-year objective structured clinical examination (OSCE).

Method. Subjects were three matriculating classes (1993, 1994, 1995) at one medical school. Data for the classes were analyzed separately. Independent variables were the Academic Profile (AP), an initial ranking of applicants based on grade-point ratio and MCAT scores, and the Selection Profile (SeP), an average of three interview scores. Interviews were offered based on AP rank, and admission was offered based on SeP rank. Dependent variables were total score on the faculty-graded portion of the OSCE and patients' satisfaction scores completed by the OSCE standardized patients. The authors evaluated the correlations between AP and OSCE performance and between SeP and OSCE performance. The authors also compared the OSCE performances of students whose ranks changed after interviews (SeP rank < AP rank or SeP rank > AP rank). The level of significance was adjusted for the number of comparisons (Bonferroni method).

Results. Complete data were available for 91% of eligible students (n = 222). No class showed a significant correlation between either AP or SeP rankings and OSCE performance (p > .01). Likewise, there was no difference in OSCE performance for students whose ranks changed after the interview.

Conclusions. The admission ranking and interview process at this medical school did not predict clinical performance or patients' satisfaction on this OSCE.

Acad. Med. 2000;75:743-747.
Medical schools seek to admit applicants who can complete the academic requirements of medical school, can perform well as practicing physicians, and have the personal characteristics of physicians valued by members of our society.1 However, there is an ongoing need for medical schools to evaluate their admission processes.2 Premedical-school academic performance indicators, such as grade-point ratio (GPR) and Medical College Admission Test (MCAT) scores, correlate with standard measures of academic success in medical school, such as scores on National Board examinations.2,3 It has been more difficult to predict performance in the clinical years.2

Dr. Basco is assistant professor, Department of Pediatrics; Mr. Gilbert is a statistician and research analyst, the Center for Clinical Evaluation and Teaching; Dr. Chessman is associate professor, Department of Family Medicine; and Dr. Blue is assistant dean for curriculum and evaluations, Office of the Dean, and assistant professor, Department of Family Medicine, all at the Medical University of South Carolina, Charleston.

Correspondence and requests for reprints should be addressed to Dr. Basco, Department of Pediatrics, Medical University of South Carolina, 171 Ashley Avenue, Charleston, SC 29425; telephone: (843) 792-2979; fax: (843) 792-2588; e-mail: [email protected].

The Medical University of South Carolina admission process occurs in two stages. Applicants are initially ranked by academic criteria and later ranked by interview scores, where they are evaluated for traits desirable in the ideal physician.1,4,5 Our objective was to determine whether these admission-process rankings correlated with medical students' clinical performances as measured by faculty-assigned scores and patients' satisfaction scores on a third-year objective structured clinical examination (OSCE) in family medicine. We hypothesized (1) that the pre-interview admission ranking by academic criteria would correlate poorly with faculty-graded OSCE scores, but (2) that the interview ranking would correlate positively with patients' satisfaction scores, as both measure students' interpersonal and communication abilities.
METHOD
At the Medical University of South Carolina, an applicant is initially ranked by a formula that incorporates
the undergraduate GPR and MCAT score to determine his or her Academic Profile (AP).1 Applicants with AP ranks above a predetermined cutoff complete three one-on-one interviews. Interviewers complete a scoring sheet, and the three interview scores are averaged to determine each applicant's Selection Profile (SeP). While the validity of the admission interview has been questioned elsewhere,6 our process takes several steps to improve reliability and validity. The three interview scores are compared for inter-rater reliability. If there is an aberrant score, two additional interviews are conducted and the aberrant score is omitted. Our interview scoring sheet asks interviewers to record the applicant's traits that an ideal physician should possess,4 which have been abstracted from the 87 qualities of the ideal physician developed and validated by Price and co-authors.5 Interviewers are not provided transcripts or MCAT scores but may gain insight into past academic performance from an applicant's letters of recommendation. Matriculants in 1993 and 1994 were offered admission based on their SePs alone. Matriculants in 1995 were offered admission based on a final ranking of 20% AP and 80% SeP.
The subjects of our 1999 study were the 1993, 1994, and 1995 matriculants at the Medical University of South Carolina. One of us (GEG) merged admission data for each applicant with the family medicine OSCE database, thereby eliminating applicants who were interviewed but not offered admission, applicants who were offered admission but did not accept, matriculants who fell out of their matriculant cohorts, students who transferred to the school, and matriculants in our MD-PhD programs. We excluded students who did not take the OSCE with their matriculation cohorts because their comparison groups for the admission rankings were different from their comparison groups for OSCE performance. We excluded early-decision candidates because their selection process did not include MCAT scores, from which the AP is derived. Finally, we eliminated out-of-state residents because they are subject to a slightly different admission process. The subjects were then ranked within their matriculation cohorts by AP score and by SeP score. We identified students whose ranks decreased after their interviews (SeP rank < AP rank).

The OSCE was developed for the junior core clerkship in family medicine over the three years prior to the first cohort's taking the test for this study. For the 1993 matriculants, the test consisted of five stations: four interviews of standardized patients and one physical-examination skills assessment; for 1994 and 1995 matriculants, there were six stations: four interviews of standardized patients and two physical-examination skills assessments. Faculty and standardized patients completed evaluations of the students for each interview station, and faculty completed ratings for the physical-examination stations.
The faculty-graded OSCE score was the sum of individual station scores, each on a five-point Likert-type scale (1 = poor; 5 = excellent). For the physical-examination stations the faculty members completed a checklist recording whether the student performed certain elements of the physical examination. For the patient-interview stations faculty members also completed a checklist of behaviors, derived from Cohen-Cole's three-function model.7 The standardized patients completed a ten-item, five-point Likert-type scale (1 = poor; 5 = excellent) adapted from the American Board of Internal Medicine's Patient Satisfaction Questionnaire.8 All third-year students take the family medicine OSCE, since family medicine is a required rotation.
Data were analyzed using a standard statistical analysis software package. Descriptive analyses were produced for admission and OSCE variables. Pearson product-moment correlation coefficients were calculated for the AP score and the SeP score compared with the faculty-graded OSCE scores and the patients' satisfaction scores. We compared mean faculty-graded OSCE scores and patients' satisfaction scores (Student's t-test) of those students who moved down in ranking (e.g., from second in rank to 40th) with those of the students who moved up in rank after interviews. Finally, we completed stepwise linear regression of the AP score on faculty-graded OSCE score and patients' satisfaction score as well as SeP score on faculty-graded OSCE score and patients' satisfaction score. Control variables for these models were gender, ethnicity (underrepresented minority versus other), curriculum (problem-based learning versus traditional), and rotation number (1-12, to control for experience gained during the third academic year).

Analyses were conducted separately for each class because the three cohorts had slightly (but nevertheless statistically significantly) different mean AP scores, mean SeP scores, and mean OSCE scores. In addition, the 1993 matriculants took an OSCE with fewer stations. Our post-hoc power calculations using the sample sizes available for each class revealed that stepwise linear regression had a power of between 0.62 and 0.66 to detect a correlation of 0.15 or greater. For all analyses, the level of significance was adjusted for the number of comparisons completed (Bonferroni method). Our institutional review board approved this study.
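[Editor's illustration] The core of the analysis just described, a Pearson product-moment correlation judged against a Bonferroni-adjusted significance threshold, can be sketched as follows. All scores below are invented for illustration; they are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (ss_x * ss_y)

# Hypothetical scores for a handful of students (NOT the study's data)
ap_scores   = [10.1, 10.9, 9.8, 11.2, 10.4, 9.5, 10.7, 11.0]  # Academic Profile
sep_scores  = [5.5, 5.9, 5.2, 6.0, 5.6, 5.1, 5.8, 5.7]        # Selection Profile
osce_scores = [18, 21, 17, 20, 19, 16, 22, 18]                # faculty-graded OSCE

comparisons = {
    "AP vs. OSCE":  pearson_r(ap_scores, osce_scores),
    "SeP vs. OSCE": pearson_r(sep_scores, osce_scores),
}

# Bonferroni adjustment: divide the nominal alpha by the number of
# comparisons, so each individual test faces a stricter threshold.
nominal_alpha = 0.05
adjusted_alpha = nominal_alpha / len(comparisons)

for label, r in comparisons.items():
    print(f"{label}: r = {r:+.2f}; judge its p-value against alpha = {adjusted_alpha}")
```

The same pattern extends to the study's full set of tests: the more correlations and t-tests performed, the smaller the per-test alpha becomes.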
RESULTS
The analysis cohort consisted of 222 regular-admission, non-MD-PhD, in-state students matriculating in 1993 (n = 70), 1994 (n = 81), and 1995 (n = 71). The ranges (with means and standard deviations) for the three classes' AP scores were: 1993, 2.4-14.6 (10.3 ± 1.32); 1994, 8.6-13.5 (10.7 ± 0.95);
Table 1
Pearson Product-Moment Correlation Coefficients of Faculty-graded and Patients' Satisfaction Scores of a Family Medicine OSCE with Academic and Selection/Interview Profile Scores for 1993-1995 Matriculants to the Medical University of South Carolina*

                      Faculty-graded Score                        Patients' Satisfaction Scores
                      1993         1994         1995             1993         1994         1995
                      (n = 70)     (n = 81)     (n = 71)         (n = 70)     (n = 81)     (n = 71)
                      r (p-value)+ r (p-value)+ r (p-value)+     r (p-value)+ r (p-value)+ r (p-value)+
Academic Profile      0.25 (.03)   0.11 (.31)   0.01 (.97)       0.05 (.65)   0.00 (.99)   0.07 (.53)
Selection Profile     0.18 (.14)   0.16 (.16)   0.27 (.02)       0.15 (.16)   0.14 (.20)   0.19 (.07)

*The study cohort excluded matriculants who did not take the OSCE with their classes, students who transferred to the school, and matriculants to MD-PhD programs.
+Test of H0: rho = 0.
Table 2
Means of Faculty-graded and Patients' Satisfaction Scores of a Family Medicine OSCE for Those Applicants Whose Admission Rankings Changed after Interviews, 1993-1995 Matriculants to the Medical University of South Carolina*

                      Faculty-graded Scores              Patients' Satisfaction Scores
Year   Movement       Mean    SD    n    p-value+        Mean      SD     n    p-value+
1993   Upward         18.89   0.4   36   .30             167.09    18.3   43   .84
       Downward       18.32   0.4   34                   167.90    18.0   42
1994   Upward         19.30   0.3   43   .47             157.59    13.1   43   .47
       Downward       18.92   0.4   38                   159.51    10.9   39
1995   Upward         22.42   0.4   33   .09             161.49    11.8   44   .74
       Downward       23.53   0.4   38                   160.56    14.5   43

*The study cohort excluded matriculants who did not take the OSCE with their classes, students who transferred to the school, and matriculants to MD-PhD programs.
+Test of H0: Upward = Downward.
1995, 9.3-12.5 (10.5 ± 0.67). The ranges for the SeP scores were: 1993, 4.7-6.2 (5.6 ± 0.3); 1994, 4.5-6.2 (5.7 ± 0.4); 1995, 4.1-6.2 (5.7 ± 0.4). The ranges for the faculty-graded OSCE scores were: 1993, 14-24 (18.6 ± 2.3); 1994, 14-23 (19.1 ± 2.4); 1995, 16-28 (23.0 ± 2.7). The ranges for the OSCE patients' satisfaction scores were: 1993, 121-222 (166.3 ± 19); 1994, 124-187 (158.5 ± 12.1); 1995, 132-189 (160.7 ± 13.4).

Internal consistencies for the faculty-graded OSCE scores ranged from 0.35 to 0.50 for the three cohorts. For all three classes, there was no correlation between either AP or SeP and faculty-graded OSCE scores (Table 1; all p > .01). Likewise, there was no difference in faculty-graded OSCE scores for students whose rankings moved downward after interview (Table 2; all p > .05). Adjustment for restriction in range9 of the independent variables did not improve the correlation.
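[Editor's illustration] The range-restriction adjustment cited above corrects an observed correlation for the fact that admitted students span a narrower range of AP/SeP scores than the full applicant pool. The sketch below uses the standard textbook correction (Thorndike's Case II); whether reference 9 applies exactly this form is an assumption, and the numbers are invented.

```python
import math

def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction: estimate the correlation in the
    unrestricted population from the correlation observed in a
    range-restricted sample (e.g., admitted students only)."""
    u = sd_unrestricted / sd_restricted  # ratio of score spreads
    return (r_restricted * u) / math.sqrt(1.0 + r_restricted ** 2 * (u ** 2 - 1.0))

# Invented numbers: a weak observed correlation among admitted students
# whose AP-score spread is half that of the hypothetical applicant pool.
r_observed = 0.10
corrected = correct_for_range_restriction(r_observed,
                                          sd_unrestricted=1.32,
                                          sd_restricted=0.66)
```

As the study reports, when the observed correlation is near zero, even this correction moves it only slightly, so the substantive conclusion is unchanged.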
The internal consistency of the patients' satisfaction scoring was 0.86 to 0.96. For all three classes, there was no significant correlation between the AP and SeP scores and the total patients' satisfaction score (Table 1; all p > .05). Restriction-in-range adjustment again produced no difference. Just as with the faculty's scoring of the students, there was no significant difference in the mean patients' satisfaction scores of students who moved down in ranking after the interview compared with those who moved up (Table 2; all p > .10).

Multivariate equations failed to explain significant variations among faculty-graded OSCE scores and total patients' satisfaction scores for the classes entering in 1993 and 1994. The equation for the class entering in 1995 explained 26% of the variation in faculty-graded OSCE scores and total patients' satisfaction scores, but neither the AP nor the SeP scores affected the variations in these OSCE scores.
DISCUSSION
The results of this study are consistent with prior studies that illustrated how difficult it is to predict an applicant's future performance, particularly clinical performance, during medical school.2,10 Our admission process did not allow us to distinguish among the clinical performances of our students on this family medicine OSCE. Our effort adds to the earlier studies in two ways: (1) by evaluating how traditional academic predictors (MCAT scores and GPR) relate to clinical performance measured in a standard setting, and (2) by evaluating how application interview measures relate to clinical performance for the same students. Refining the ability to predict performance during the clinical years of medical school appears to be a logical focus for future investigation, given that previous studies have shown that pre-medical-school standardized test scores correlate with performances on standardized tests taken in medical school.2,3 In a 1990 essay, Edwards and colleagues noted that future efforts to correlate interview variables with clinical performance could be enhanced by employing structured clinical encounters that also measure communication skills, such as an OSCE.11
There are several possible explanations for the lack of correlation in our study. The AP rankings, based on pre-medical-school academic achievement, may not be sensitive to the skills required to achieve a better score on the OSCE. The skills on the faculty-graded portion of the OSCE are very dissimilar to the academic achievement that the AP measures.

However, we expected to find that the SeP rankings, based on interview scores, would have a high correlation with OSCE performances, particularly the patients' satisfaction scores. While it is certainly possible that the communication skills employed in the interview (where one is trying to impress an interviewer) are different from those required to perform well on the OSCE, traits such as being an understanding person and being truthful appear on both instruments. Our interview scoring instrument and patient-satisfaction score sheet have been validated externally.4,5 Also, the relationship between clinical competence and interpersonal and communication skills has been recently demonstrated by Colliver and co-workers.12 Another possibility is that the interview process provides only an estimate of whether certain desirable traits are present in the applicant,4 whereas the patients' satisfaction scores provide a more objective evaluation of whether the student exhibited these traits during a clinical encounter. Finally, the fact that interviewers may have had insight into pre-medical-school academic performances through letters of recommendation might have diminished the difference in correlation with later performance measures between AP and SeP rankings.
Other potential limitations of this study rest with the OSCE. The standardized nature of the OSCE and the relatively high internal consistency of the patients' satisfaction scores suggest that it is an improvement over other measures of clinical performance, such as faculty clerkship evaluations.13 However, OSCEs measure only subsets of clinical performance skills. The moderate internal consistency of faculty grading on this OSCE is also a limitation.

The lack of correlation could also lie in our exclusion of students who (for both academic and non-academic reasons) did not take the OSCE three years after matriculating. Our criteria excluded students who fell behind owing to academic difficulty as well as those (MD-PhD) involved in research. Either group's performance on the OSCE might have had a correlation with the initial AP or SeP ranking, but we can certainly say that there does not appear to be any correlation with exceptional OSCE performance for those remaining in the data set.
A final explanation could be the improvement in the clinical skills of the students. If our introduction-to-clinical-medicine courses are successful, one would expect such improvement. Differences in abilities among students that might have been present at admission would have been diminished by our educational efforts.

In summary, our admission-process ranking system did not correlate with performance on either the faculty-graded or the patients' satisfaction portion of this family medicine OSCE, which means that our admission process did not allow us to predict the performances of our students in this clinical setting.
The authors thank Ms. Carol Boyer, Director of Student Information Systems, and Mr. Balder Guerero, Information Resource Analyst, Office of Enrollment Services, for their help in putting together the analysis file.
REFERENCES

1. McCurdy L. Assessing today's applicants. Acad Med. 1997;72:1023-4.
2. Mitchell KJ. Traditional predictors of performance in medical school. Acad Med. 1990;65:149-58.
3. Elam CL, Johnson MMS. Using preadmission and medical school performances to predict scores on the USMLE Step 2 examination. Acad Med. 1994;69:852.
4. Sade RM, Stroud MR, Levine JH, Fleming GA. Criteria for selection of future physicians. Ann Surg. 1985;201:225-30.
5. Price PB, Lewis EG, Loughmiller GC, Nelson DE, Murray SL, Taylor CW. Attributes of a good practicing physician. J Med Educ. 1971;46:229-37.
6. Taylor TC. The interview: one more life? Acad Med. 1990;65:177-8.
7. Cohen-Cole SA. The Medical Interview: The Three-function Approach. St. Louis, MO: Mosby-Year Book, 1991.
8. Tamblyn R, Benaroya S, Snell L, McLeod P, Schnarch B, Abrahamowicz M. The feasibility and value of using patient satisfaction ratings to evaluate internal medicine residents. J Gen Intern Med. 1994;9:146-52.
9. Cohen J, Cohen P. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates, 1975.
10. Tekian A. Cognitive factors, attrition rates, and underrepresented minority students: the
problem of predicting future performance. Acad Med. 1998;73(10 suppl):S38-S40.
11. Edwards JC, Johnson EK, Molidor JB. The interview in the admission process. Acad Med. 1990;65:167-77.
12. Colliver JA, Swartz MH, Robbs RS, Cohen DS. Relationship between clinical competence and interpersonal and communication skills in standardized-patient assessment. Acad Med. 1999;74:271-4.
13. Campos-Outcalt D, Watkins A, Fulginiti J, Kutob R, Gordon P. Correlations of family medicine clerkship evaluations and structured clinical examination scores and residency directors' ratings. Fam Med. 1999;31:90-4.
75 YEARS AGO
Modern Educational Methods and Their Relation to Medical Education

E. Stanley Ryerson, Secretary, Faculty of Medicine, University of Toronto
A knowledge of the main features of the teaching methods which are being subjected to experiment in schools of the present day may be of assistance to those interested in the problem of teaching medical students. The method of allowing children of kindergarten age to educate themselves by placing before them facilities from which they acquire ideas and facts of various kinds was introduced first in Italy by Mme. Montessori. The teacher occupies a purely secondary place, answers questions and acts as an assistant and guide to the child instead of a dictator and instructor. . . . The chief feature of the method consists of centering the attention of the child and keeping the teacher in the secondary position.

. . . In order to stimulate the interest of pupils in their studies, Mr. H. Caldwell Cook has introduced a method known as The Play Way. He endeavors to get the pupils to take an active part in learning by encouraging them to carry on debates, give little lectures and write topical verses. The chief concern is the development of an attitude toward their lessons similar to that toward their games. The work is not treated frivolously, but with due seriousness. So long as the pupils get enjoyment out of their work, no difficulty is met in getting them to concentrate upon it.

The operation of the underlying principle of this method is seen in medical education in the change of attitude of many students when they begin seeing and diagnosing actual cases in the hospital, after leaving the practical studies in the laboratory and dissecting room. The introduction of clinical teaching in the earlier years of the course is based to a certain extent on this principle.

The general recognition of the purposive element is another of the modern tendencies in school method. The view that pupils should know why they learn this or that in school has been very widely accepted. The problems met with have to do with real life instead of some vague hypothetical situation. Out of this idea, there has developed what is called the Project Method. The basis of this scheme consists in providing for the students the development of some problem from actual life and in enabling them to evolve the underlying principles. Instead of studying individual subjects, learning their principles and being left to apply and correlate them at some later stage in life, the pupils are brought in contact with some actual experience in the expectation that they will see the relations and applications and draw therefrom conclusions as to the principles involved. Instead of systematically covering physics, chemistry and biology in a science course in high school, instruction is given in such projects as Inventions (wind mill; water mill; their uses; the lift pump; levers; pulleys; simple machines) in physics; or Iron (smelting of ores; cast iron and steel) in chemistry; or Pond Life in the Fall (turtle; frog; the pond as a life society or life group) in biology.

. . . On the one hand, the endeavor is to make the student acquire a systematic and organized knowledge of each particular subject by leading him logically from the elements of each to a complete conception of them all; but leaving their integration, correlation and application to the chance experience of each individual student.

On the other hand, there are those who consider it more important to attain an intelligent organized grasp of the whole with a realization of the relations and applications of its component parts. For the latter, the Project Method holds out many advantages over the older more formal type. The solution probably lies between the two extremes. The tendency in the past has been unquestionably to pay too much attention to individual subjects and too little to the organized knowledge of the whole.

Such a state exists in medical education, in which individual subjects are taught to a large extent independently of one another and with no purposive effort to produce a well-balanced, co-ordinated, organized, practical conception of the whole field of medical science. The introduction of the study of certain large subjects along the lines of the Project Method would assist in attaining this result.

Modern Educational Methods and Their Relation to Medical Education. Bulletin of the Association of American Medical Colleges. 1926;1:168.