
Evaluating Student Performance in the Experiential Setting with Confidence Diane E. Beck School of Pharmacy, Auburn University AL 36849-5502

Larry E. Boh School of Pharmacy, University of Wisconsin, Pharmacy Practice Division, 425 North Charter Street, Madison WI 53706

Patricia S. O’Sullivan College of Nursing, University of Arkansas for Medical Sciences, 4301 West Markham Street, Little Rock AR 72205-7199

This paper proposes that pharmacy schools should have a functional experiential evaluation system (EES). An EES consists of multiple assessment methods, guidelines for their optimal use, and procedures that promote faculty-student communication about performance. With respect to selection of assessment methods, pharmacy educators are encouraged to adopt the combination of observation-based ratings, Objective Structured Clinical Exams (OSCEs), and examinations consisting of Extended Matching Items (EMIs). Practical strategies an individual instructor may use to enhance the accuracy of observation-based ratings and examinations are described. With respect to OSCEs, the authors call for collaborative efforts among consortia schools in concert with the Center for the Advancement of Pharmaceutical Education (CAPE). At the school level, the entire faculty must be accountable for maintaining a quality EES that promotes faculty-student communication about performance. Only after implementation of a comprehensive quality EES can a school’s faculty feel confident in decisions about students’ preparedness to deliver pharmaceutical care.

INTRODUCTION
As pharmacy schools develop and expand their entry-level degree program’s experiential component, faculty confidence in decisions about student performance in the practice setting has taken on renewed importance. To achieve this confidence, first, the faculty must have data of appropriate quality and quantity on which to base an assessment. Second, the assessment methods must be free of biases that potentially threaten the process. Third, the assessment methods collectively must provide a level of accountability. These qualities ensure validity and reliability of the assessment process. Decisions made under these circumstances assure the pharmacy faculty and other stakeholders that the school only graduates those who can successfully provide pharmaceutical care.

An earlier paper explained why instructors should use several methods for assessing student performance during experiential rotations and why observation-based performance ratings should be the primary component(1). It was cautioned that observation-based performance ratings often suffer from rating inaccuracy and frequently are based on too few observations. Because rating inaccuracy has been attributed to the rater’s cognitive processing skills, Beck et al. described a model designed to make raters more aware of how they acquire, store, recall, and integrate information into ratings. Pharmacy educators were encouraged to provide their experiential faculty1 with a rater training program based on this model.

However, because there are so many facets influencing performance assessment, rater training alone will not resolve all assessment problems. One way to address these multiple facets is to establish an experiential evaluation system (EES). An EES consists of a combination of assessment methods, guidelines for their optimal use, and the procedures established for faculty-student communication about student performance(2).

1A pharmacy school’s full-time, part-time, and volunteer faculty.

The goal of this paper is to describe how to implement an EES that promotes faculty confidence about performance decisions. First, the reader will be provided recommendations for selecting assessment methods for the EES. Second, practical strategies for overcoming the limitations of observation-based ratings, simulations, and written examinations will be delineated. Finally, suggestions for communicating and handling issues related to performance assessment will be discussed. Many of the recommended strategies can be implemented by an experiential faculty1 or individual instructor. However, others require involvement of either the entire school faculty or pharmacy leaders at the national level.

SELECTING ASSESSMENT METHODS
Consider Goals and Objectives. When selecting the most appropriate assessment methods, an experiential faculty must first consider the learning goals and objectives for the rotation(3). These goals and objectives should be explicit statements that communicate the activity and level of performance the student is expected to demonstrate. Since the primary aim of pharmacy practice experiences is to have students successfully provide patient care in the actual clinical setting, the goals and objectives should reflect this endpoint.


Fig. 1. Framework for clinical assessment(4). Published with permission from Academic Medicine.

The learning pyramid offered by Miller(4) emphasizes that attainment of such goals requires achievement of the highest levels of learning (Figure 1). Miller’s model has a role in experiential education much like that which Bloom’s Taxonomy for Cognitive Learning Objectives has fulfilled in classroom learning(5). According to Miller’s model, there are four levels of ability the learner must achieve in order to provide patient care, and they successively build on each other.

Miller posits that at the lowest level, the practitioner must have a basic fund of knowledge, or “know” the information needed to accomplish the higher tiers of performance. Having a fund of knowledge, however, does not assure that the individual can apply the information and solve problems. Therefore, the second tier indicates that the learner must “know how” to use the knowledge; this tier reflects the usual definition of competence(4). However, competence does not assure that the learner can actually provide patient care. Hence, the third tier indicates the individual must be able to “show how,” or perform the function. At the highest tier, the learner successfully provides patient care even when not in an artificial testing situation. Specifically, the learner “does,” which means one accomplishes complex functions and incorporates them into routine practice.

The provision of pharmaceutical care requires a fund of knowledge as a base, but the majority of the rotation goals should rest at the two highest levels. Therefore, the primary assessment methods adopted for an EES should infer whether the student “does” and whether the student can “show how.”

Desired Attributes. In addition to inferring performance in routine clinical practice (i.e., the student “does”), the ideal assessment method should manifest two other major attributes(6). First, the method should accurately separate the good student from the bad one. Second, because pharmacists encounter an array of drug-related problems and issues in the clinical setting, the assessment method should accumulate enough data to predict how effectively the student solves patient problems of varying type and difficulty. Other characteristics of an ideal assessment method include low cost of development and administration, practicality, and ease of use(3).

Possible Assessment Methods. Table I provides a listing of assessment methods frequently used in experiential education and their strengths based on the attributes just delineated. Unfortunately, there is no single assessment method for which these characteristics are all strong(6). Therefore, the instructor must use a combination of assessment methods whose strengths complement each other(7,8). A combination of observation-based ratings, simulations, and written or computer-based examinations achieves this aim and, therefore, should constitute the primary assessment methods of an EES(1). The instructor’s challenge is to minimize the limitations each exhibits.

Patient or case presentations are frequently used to assess experiential student performance, but they should not serve as a primary assessment method(1). During a patient presentation, the student presents a patient case to a group of peers and faculty and describes relevant literature. Although patient or case presentations involve direct observation of the student, good interrater reliability can be achieved(9). However, the major limitations of this method are the inability to infer the student’s performance in the actual practice situation and management of a variety of patients with problems of varying type and difficulty.

Faculty members should also be cognizant that the patient presentation primarily assesses the student’s ability to communicate patient information and often contains information elicited by faculty and peers during prior discussions about the patient(10,11). Also, it is possible for students to present a polished case study without having actively participated in providing patient care. If patient presentations are used as a secondary mechanism of assessment, instructors can minimize these limitations. For example, one could require that the content of the presentation elicit what the student did to resolve an actual drug-related problem of the patient(9). In addition, the instrument used to evaluate this content should weight items assessing the student’s management of drug-related problems more heavily than those related to delivery of the presentation.

Table I outlines other methods for evaluating clinical performance and their attributes concerning the characteristics of the ideal assessment method. Some of the methods, such as peer assessment, have not been tested in pharmacy education and primarily have a role in formative evaluation. Chart audits are advocated as an assessment method that infers how the practitioner performs in routine practice. However, until most pharmacy practitioners and students routinely document their recommendations in the patient’s chart, this assessment method will have a limited role in evaluating student performance. The other methods cited in Table I do not have as many qualities as do observation-based ratings, simulations, and written or computer-based examinations. Therefore, the other methods listed in Table I should only supplement this combination.

Weighting of Assessment Methods. Since pharmacy student rotations are experiential, the assessment methods that best measure the student’s performance in providing routine patient care should be weighted the most when assigning the final course grade(1). This relative weighting can be visualized by simply inverting the triangle in Miller’s model (Figure 2). As depicted in Figure 2, observation-based ratings should be weighted the most, followed by simulations. Written or computer-based examinations, which assess problem-solving skills rather than factual knowledge, may also be used, but they should be weighted the least.
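To make this relative weighting concrete, the brief sketch below combines the three primary assessment methods into a composite rotation score. The specific weights (60/30/10) and the 0-100 scale are illustrative assumptions only; the paper prescribes the ordering of the weights, not their values, and a school's faculty would set its own scheme a priori.

```python
# Illustrative sketch only: combines scores from the three primary assessment
# methods using hypothetical weights that mirror the inverted Miller pyramid
# (observation-based ratings weighted most, written/computer exams least).

ASSUMED_WEIGHTS = {
    "observation_ratings": 0.60,   # "does" - weighted the most
    "simulations": 0.30,           # "shows how"
    "written_exam": 0.10,          # "knows how" - weighted the least
}

def final_rotation_score(scores: dict[str, float]) -> float:
    """Return a weighted composite on a 0-100 scale.

    `scores` maps each assessment method to a 0-100 score.
    """
    if set(scores) != set(ASSUMED_WEIGHTS):
        raise ValueError("scores must cover exactly the three primary methods")
    return sum(ASSUMED_WEIGHTS[m] * scores[m] for m in ASSUMED_WEIGHTS)

# Example: strong observed performance dominates a weaker written exam.
print(final_rotation_score({
    "observation_ratings": 92.0,
    "simulations": 85.0,
    "written_exam": 74.0,
}))  # 88.1
```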


Fig. 2. The relative weighting of the evaluation methods applicable to experiential learning.

Based on these weightings, we will now focus our discussion on these three primary assessment methods. For each method, a description of its attributes and deficiencies will be highlighted. In addition, the reader will be provided with practical strategies for implementing and using each method.

OBSERVATION-BASED RATINGS
Of the methods listed in Table I, observation-based ratings best infer how the student performs in the actual patient care setting(6). Therefore, they are best able to assess whether the student “does.” Although observation-based ratings have this strength and this method is widely used to evaluate pharmacy student performance(12), pharmacy educators have not elucidated ways to enhance their accuracy. The following strategies give special attention to ways an experiential instructor can strengthen rater accuracy and accumulate a sufficient number of observations.

First, experiential faculty members can enhance rater accuracy by participating in Rater Accuracy Training (RAT) workshops(13). A RAT session usually begins by discussing the following points about the rating instrument: (i) the dimensions of practice comprising the instrument; (ii) its scale anchors; and (iii) types of behavior indicative of these anchors. Those participating in the workshop then view a video vignette depicting the performance of one or more students. After this viewing, each RAT participant individually rates the student’s performance. The trainer then follows up with a discussion about the accuracy of the participants’ ratings and the student’s effective and ineffective behaviors. Group discussions are an essential component of this training process and encourage all program faculty to use the instrument in the same way.

Although such training can enhance rater accuracy, RAT alone will not resolve all causes of rater error. To further reduce rating error, experiential instructors must improve their cognitive processing skills(14-16). Specifically, experiential instructors must attend to how they acquire, encode, retrieve, and integrate performance information into a rating. Figure 3 describes these four processes and depicts how each builds on the others.

Fig. 3. The cognitive processing model and instructor strategies.

For example, since acquisition of information provides the foundation of information that is recorded, stored, and eventually integrated into performance ratings, the effects of biased observations can be profound. This figure also summarizes practical strategies the experiential instructor can implement to enhance his/her cognitive processing of performance information. Since these strategies can enhance the reliability of an important assessment method, we will now explain them in greater detail.

Directly Observe the Student. Based on the Cognitive Processing Model described in Figure 3, experiential faculty members must make accurate observations and accumulate them in sufficient number and variety(1,6). Biased observations can occur if the experiential instructor has preconceived impressions about the student. Therefore, experiential instructors must observe the student with special care if they have had earlier interactions with the student or talked with another instructor who has taught the student.

At the very least, experiential instructors must directly observe their student performing skills such as conducting medication histories, providing patient education, preparing intravenous medications, and communicating with health professionals. This premise is drawn from findings of medical educators who have identified the importance of directly observing medical students and residents as they perform physical examinations(17-22). They have concluded one should not solely rely on the student’s reported findings during rounds or patient presentations. Medical school faculty members emphasize that without observation, errors of physical exam procedures may go undetected, lead to inaccurate assessments of student performance, and instill incorrect practice skills.

Data depicting the student managing a variety of patients will give faculty greater confidence in decisions about experiential performance(6). Therefore, to accumulate enough data, the experiential instructor needs to make a sufficient number of observations. Experiential instructors frequently ask, “What is a sufficient number of observations?” Unfortunately, a single quantifiable number does not exist. The number of recorded observations should be sufficient such that, upon review by another faculty member, this individual would reach the same conclusion.


Fig. 4. Example of an anecdotal record.

If the setting provides limited opportunities to observe certain practice dimensions2, the options are to either supplement the site’s learning activities or reconsider the site as one for experiential training.

Frequent but unobtrusive observations by the instructor minimize student awareness of being assessed and, therefore, will better reflect how the student performs in routine practice. Experts in observational research indicate that the person being observed may initially perform differently than they routinely do, but after this initial awareness they revert to their usual performance(23). Therefore, the experiential instructor should observe the student several times daily. The experiential instructor is more likely to be unobtrusive if one observes the student from a distance and does not keep one’s gaze fixed only on the student.

Record Observations. If experiential instructors store their observations only in memory, they are likely to exhibit poor recall or less organization of information at the time of assessment, or both(1). When the student asks why the instructor assigned a certain rating, the instructor has difficulty providing concrete evidence. For example, an instructor may tell the student, “I gave you a rating of marginal because you communicate poorly with others.” When the student asks what is meant by “poor,” the instructor is not able to recall specific incidents and has difficulty explaining this rating. This issue can quickly escalate into a situation where the student challenges a grade and the instructor lacks documentation to support the decision.

2The independent qualities of the overall performance. For example, several dimensions of competence in pharmacy practice are communication skills, patient monitoring skills, and professionalism.

Fig. 5. Ratings should be assigned when the student’s performance has reached steady-state.

The experiential instructor may minimize this by recording observations of the student(1). In the clinical setting, anecdotal records serve as one way to record observations of student behavior(24-26). The experiential instructor makes an anecdotal record by observing a single event and writing a description in specific terms. One should document facts and not opinions about the observed behaviors. Brevity and clarity are more important than grammatically correct sentences. Therefore, the instructor may use abbreviations, codes, and symbols to facilitate the recording step. The instructor should record the event as soon as possible after the observation to attain an accurate description. To enhance data retrieval, the instructor should include on each anecdotal record the student’s name and the task or dimension it pertains to. If the instructor wants to make an interpretation or assessment during the recording step, the instructor should document this separately from the described behavior (i.e., on the back of the anecdotal record). Figure 4 provides an example of an anecdotal record.

Rater compliance is the major factor limiting the success of anecdotal records. Instructors should individualize the recording technique to meet their particular teaching and practice situations. For example, some instructors may prefer recording their observations on a pocket dictaphone. During an inpatient clerkship, the instructor may make notations directly on a card or monitoring form used to track patient-student assignments.

Retrieving Performance Data. At the end of the assessment period, the experiential instructor should use a structured approach when retrieving the recorded observations(1). Anecdotal records should be sorted by dimension so that the instructor can rate each student on the first dimension before rating the next dimension. If an anecdotal record depicts several dimensions of providing patient care, it should be reviewed again when rating each applicable dimension.
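The following minimal sketch, written under stated assumptions, shows one way anecdotal records could be stored and then grouped by dimension for the structured retrieval step described above. The field names and example dimension strings are hypothetical; the paper specifies only that each record identify the student and the task or dimension, describe facts, and keep any interpretation separate from the described behavior.

```python
# Sketch under stated assumptions: a simple anecdotal-record structure and a
# retrieval step that groups records by practice dimension so each dimension
# can be rated in turn.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class AnecdotalRecord:
    student: str
    dimension: str            # e.g., "communication skills", "patient monitoring"
    observed_on: date
    observed_facts: str       # facts only, described in specific terms
    interpretation: str = ""  # kept separate from the described behavior

def group_by_dimension(records: list[AnecdotalRecord]) -> dict[str, list[AnecdotalRecord]]:
    """Sort records so the instructor can rate one dimension at a time."""
    grouped: dict[str, list[AnecdotalRecord]] = defaultdict(list)
    for rec in sorted(records, key=lambda r: (r.dimension, r.observed_on)):
        grouped[rec.dimension].append(rec)
    return dict(grouped)
```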

Integrate Data into Scores. At the time of assigning ratings, the instructor must integrate the accumulated data into scores(1). When retrieving and reviewing the recorded observations, the experiential instructor should be mindful that, when under stress, one is more likely to depend on first impressions and cannot accurately differentiate performance of the component dimensions being evaluated(27). Therefore, the assignment of ratings should be done at a predetermined time when one does not have other competing demands.

Experiential instructors frequently encounter uneven student performances during a rotation. How the instructor addresses this variation when assigning ratings is a potential source of error. Reliability and generalizability theories assume that when instructors assign ratings, the student’s performance is at “steady-state”(28,29) (Figure 5). Given this premise, an experiential instructor may want to establish a priori the number of days students typically need to become acclimated to the practice setting. Following this learning period, the instructor should weight observations equally since the student’s performance should be at a plateau.

Designating the steady-state period, during which the experiential instructor will collect performance data, also separates the time of learning from the time of assessment(30). This is important since students usually face a learning curve at the beginning of a rotation. During this learning phase, the student should feel comfortable in seeking feedback from the instructor about how to improve and perform various professional skills. Students should not be concerned about being penalized for what they do not know. Although this concept is sound, it can compromise the instructor’s opportunity to make a sufficient number of observations during the steady-state phase if the rotation is too short in duration.

Frequently, instructors struggle with how to rate a student who performed “poorly” during the first portion of the “steady-state period” but improved significantly during the remainder of the rotation. To resolve this dilemma, the instrument could include a dimension that assesses the student’s ability to perform consistently. This is an appropriate dimension to evaluate since a competent practitioner must exhibit consistent performance. By having this as a separate dimension, the instructor is able to evaluate both the student’s current level of performance and consistency.

Negative incidents, which occur either at the beginning or the end of the steady-state period, may bias the selection of a score more than if these events took place between these two times(31,32). The instructor can minimize such a problem by rating the student’s performance several times during the steady-state period and then using these intermediate ratings to guide the final assessment.
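One way to operationalize this suggestion is sketched below: several intermediate ratings collected during the steady-state period are combined with equal weight, so a single negative incident at the start or end of the period cannot dominate the final rating. The 1-5 scale is an assumption used only for illustration.

```python
# Illustrative sketch: equally weighted intermediate ratings collected during
# the steady-state period guide the final rating for one dimension.

def steady_state_rating(intermediate_ratings: list[float]) -> float:
    """Mean of intermediate ratings on an assumed 1-5 scale."""
    if not intermediate_ratings:
        raise ValueError("at least one intermediate rating is required")
    return sum(intermediate_ratings) / len(intermediate_ratings)

# A weak rating in the final week pulls the result down only modestly.
print(steady_state_rating([4.0, 4.5, 4.5, 3.0]))  # 4.0
```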

Other Raters. Some instructors are just “better raters.” These individuals often have more experience in rating students. Research has found that these raters are superior in attending to the dimensions of multidimensional behaviors(33). In rating performance, these individuals can also more accurately differentiate among a group of students(34). Therefore, when an experiential instructor is perplexed about the most appropriate score, one may wish to seek the assistance of a colleague who has such abilities. Littlefield(35) has advocated a similar concept by involving “gold-standard raters” when evaluating the marginal student. In medical and dental education, academicians have proposed that other health professionals and patients who interact with the student may more appropriately observe and evaluate humanistic and professional-patient relationship skills(36-38). Physicians and nurses can provide valuable insights about a pharmacy student’s performance in these areas. However, experiential instructors should include ratings assigned by patients with caution since patients often overrate a student’s performance(39,40).

These strategies should assist the experiential instructor in making more accurate ratings. However, the instructor must be willing to assign the most appropriate scores(41-43). This conflict between the rating deserved and what the rater is willing to assign often becomes evident when the instructor must convert the ratings into a grade. For example, an instructor may conclude that a student deserves a grade of “A” based on the overall performance. However, when a mean percentage score is computed using the rating instrument, the result is 80 percent, or “B” level work (i.e., an average rating of 4 using a 5-point rating scale).

To minimize this, the EES should incorporate mechanisms that separate the process of assessment from the assignment of a grade(42,43). For example, the experiential instructor could assess student performance using the assessment instrument but not assign a grade. The experiential program director would then undertake the responsibility of converting the ratings into a grade. The grade could be determined using a conversion table, established by the faculty a priori, which interpolates the overall mean score to a grade (i.e., a mean rating of 4 on a 5-point scale corresponds to a grade of “A”). The experiential director would then present the grade to the instructor for final approval.
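A conversion table of the kind described above might look like the following sketch. The cut points are hypothetical values chosen only to illustrate how the program director could convert a mean rating to a grade using a table established by the faculty a priori, separate from the instructor's assessment.

```python
# Sketch with assumed cut points: the instructor supplies ratings, and the
# experiential program director converts the mean rating to a letter grade
# using a conversion table established by the faculty a priori.

ASSUMED_CONVERSION_TABLE = [  # (minimum mean rating on a 5-point scale, grade)
    (4.0, "A"),
    (3.0, "B"),
    (2.0, "C"),
    (0.0, "F"),
]

def grade_from_ratings(ratings: list[float]) -> str:
    mean_rating = sum(ratings) / len(ratings)
    for minimum, grade in ASSUMED_CONVERSION_TABLE:
        if mean_rating >= minimum:
            return grade
    return "F"

# A student rated 4 on average receives an "A", not the 80 percent "B"
# implied by a raw percentage conversion.
print(grade_from_ratings([4, 4, 4, 4, 4]))  # "A"
```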

SIMULATIONS
The desire for an assessment method that not only predicts performance in a variety of clinical situations but is also reliable has led medical and nursing educators to use simulations(44-57). Those described in the literature range from Objective Structured Clinical Examinations (OSCEs) to computer-based simulations.

An OSCE involves subjecting students to multiple standardized patient encounters as they rotate through a series of work stations(46). Standardized patients are individuals trained to serve as both a patient and an evaluator(44,45,52-54). They may or may not be actual patients.

Although OSCEs have proven reliable and can test a variety of problems, there are few data inferring that they truly predict how the student will perform in routine practice(6,44). OSCEs conducted in a laboratory setting provide an artificial environment whereby students only have to perform well for a brief period. To compare medical resident performance in routine practice with that achieved during an OSCE, researchers have sent undetected standardized patients to residents’ clinics and compared the management of such patients to the residents’ performance during an OSCE(49,58). Tamblyn et al. found the OSCE data predicted the medical residents’ performance with the standardized clinic patients; however, the data were not accurate for those residents performing in the lowest quartile(49,58). Several pharmacy educators have described use of OSCEs to assess experiential student performance(59-61). The educators reporting these experiences have provided limited psychometric data inferring validity as cited by Kane(6). Guidry et al.(61) reported the development, reliability, and validity of an OSCE designed to evaluate students completing a pharmacy externship program. Although good interrater reliability was achieved with most of the work stations requiring direct observation by the instructor, it was not achieved with the one involving patient communication skills. This OSCE correlated better with NABPLEX scores than did written examinations and preceptor ratings. However, we do not know the validity and reliability of those written examinations and preceptor ratings.

The implementation of an OSCE is a monumental task, and cost is the primary factor limiting its widespread use(62,63). Effective OSCEs require the time not only of health professionals, but also of standardized patients and psychometricians.


Given the extensive resources needed to implement valid and reliable OSCEs, we encourage a pharmacy school to make this assessment method an effort of the entire faculty and to collaborate with regional schools(64). At the national level, pharmacy leaders are encouraged to develop consortia programs composed of nearby pharmacy schools. A consortium of pharmacy schools can share personnel, assessment instruments, and other supporting materials(64).

Medical and nursing educators have described use of computerized simulations to assess whether the student can demonstrate successful management of a case and problem-solving skills (i.e., “shows how” and “knows how”). Although pharmacy schools have described use of computerized simulations, they have not widely used such methods in the actual assessment of student performance during experiential rotations(65-70). Since these often require fewer resources than an OSCE, they hold a promising future in the assessment of pharmacy student performance. However, computerized simulations are less able to assess performance in areas such as interpersonal communication skills unless expensive interactive programs are developed(69).

WRITTEN/COMPUTER-BASED EXAMINATIONS
Written examinations have been frequently described in pharmacy education literature as a means of evaluating experiential student performance(10,71,72). Although this method is usually reliable and can evaluate the student’s knowledge about a broad sample of clinical problems, it is only an indirect measure of how the student would perform when providing routine patient care. This assessment method should be weighted the least since written examinations also reward reading and studying for the tests over active participation in the patient care unit(73).

To assess whether the student “knows how,” the written test for an experiential rotation should test problem-solving and clinical judgment skills rather than just recall of factual information(74-77). This is more likely to be achieved by using Extended-Matching Items (EMIs) as an alternative to traditional test questions(77-81). In contrast to typical multiple choice questions, with EMIs there are 6-25 possible answers, and any number of them may be correct. The National Board of Medical Examiners is currently using this format and believes it: (i) is easier to score than free-response questions; (ii) more frequently tests problem-solving abilities; and (iii) decreases the likelihood of the student guessing the correct answer(79). These examiners also advocate computer-based tests consisting of EMIs and patient simulations to better assess whether the student can “show how” and “knows how.”
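The sketch below illustrates how an extended-matching item might be represented and scored. The option labels, the stem, and the scoring rule (full credit only when the selected set exactly matches the keyed answers) are assumptions made for illustration; the paper describes only the general format of 6-25 options with any number of them correct.

```python
# Illustrative sketch of an extended-matching item (EMI): one large option
# list, a stem, and a keyed set of correct options. The exact-match scoring
# rule is an assumption, not a method prescribed by the paper.

from dataclasses import dataclass, field

@dataclass
class ExtendedMatchingItem:
    options: list[str]            # 6-25 shared answer options
    stem: str                     # the clinical vignette or question
    correct: set[str] = field(default_factory=set)  # any number may be correct

    def score(self, selected: set[str]) -> int:
        """Return 1 for an exact match with the keyed answers, else 0."""
        return int(selected == self.correct)

item = ExtendedMatchingItem(
    options=["option A", "option B", "option C", "option D",
             "option E", "option F", "option G", "option H"],
    stem="Hypothetical vignette: select every option that applies.",
    correct={"option B", "option E"},
)
print(item.score({"option B", "option E"}))  # 1
print(item.score({"option B"}))              # 0
```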

OTHER EES COMPONENTS
Self-assessment procedures are an example of an assessment method that is valuable but, because of the limitations noted in Table I, should only supplement the primary methods. During a rotation, the student and instructor must work together so that deficiencies can be identified and corrected during the learning period. Student self-assessments can serve as catalysts for such discussion and are, therefore, particularly valuable in formative evaluations. Self-assessment also promotes development of lifelong learning skills(82-91). Although self-assessment procedures are valuable components of the EES, pharmacy educators should use these data carefully since medical students’ self-assessments have correlated weakly with expert judgments(84,87,91).

As one means of self-assessment, students can document their contributions to patient care and assess the quality and significance of their interventions(92-95). Such information provides the instructor and student with data that may be used to address whether the student is being exposed to good learning experiences and the extent to which the student is actively involved in patient care. This information also provides evidence to pharmacy educators and other stakeholders that a student has received an appropriate learning experience. In addition, these data emphasize to the student the outcomes expected of a practitioner.

The student’s interventions would be valuable data to include in a portfolio. Portfolios have recently received recognition by educators as a mechanism for promoting reflective thinking and self-assessment and documenting learning achievements(96-99). Assembly of a portfolio requires the student to select examples of their work and reflect on what learning these examples represent. Since these skills are essential for lifelong learning, we believe they should be assessed during pharmacy practice experiences. Educators who incorporate portfolios as a component of their EES should provide students with structured guidelines about how to assemble one. These guidelines ensure the assembled materials provide insightful documentation for both the student and the program. However, further research is needed to establish the psychometric properties of portfolios and how to assign a grade for this work.

In addition to assessment methods and guidelines for their optimal use, an EES should provide mechanisms for documenting and communicating student performance. These mechanisms can minimize any consequences that may result from performance decisions. The following student evaluation issues particularly require effective documentation and communication procedures.

STUDENT EVALUATION ISSUES
Nothing illuminates the quality of a school’s EES more than the “problem student”(100-105). Performance deficiencies may present as either cognitive or noncognitive problems. Noncognitive problems, such as poor interpersonal skills and non-assertive/shy behavior, have been noted by medical educators to be the most difficult to act upon(2,100).

Faculty members are sometimes reluctant to make negative decisions about a student’s rotation performance due to fear of legal consequences(38,106-111). However, instructor decisions about student performance have been upheld by the courts as long as the student has received due process. Based on an assessment of court cases, this is true for both tenure-track and affiliate faculty members(111).

To assure due process, the experiential instructor must inform the student about rotation expectations at the beginning of the experience and communicate performance deficiencies early in the rotation(2,38,39). Due process also requires that the school’s EES include a mechanism by which students can appeal the faculty member’s decision.

Table II outlines the responsibilities of various faculty members in communicating performance assessments with students. For example, faculty members should provide students with early warning about any deficiencies(2). An important component of this process is to provide the student with feedback about their performance.


Table I. Evaluation methods used to evaluate clinical performance and the level of ability they most effectively measure

Columns, left to right: the four levels of ability based on Miller’s Model(3) (Knows, Knows how, Shows how, Does), followed by Good interrater reliability, Good generalizabilitya, Low cost, and Feasibility/ease of use. Ratings are listed for each method in that order.

Written exams (MCQs, EMIs)(75-80): + ± 0 0 + + + +
Oral exams(11,126,132): + + 0 0 ± + + +
Triple jump(133): + ± 0 ± 0 + ±
Chart/written simulations/latent image (Developer®, A.B. Dick Co.)(125,128,129): + + ± 0 ± + + +
Computer-based exam with EMIs(75,134): + ± 0 + + ± +
Clinical Evaluation Exercise(135,136): + 0 0 0 + +
Videotape reviews(137,138): + 0 ± ± + ±
Standardized patients(44,65): + ? + + 0 0
Observations(71,72,121,127,138-149): + ± ± ± 0
Patient conference/presentation(9,10,72,150): + 0 0 ± 0 + +
Consultation letters/medical history write-ups(151,152): + ± ± + ±
Chart audits(153,154): + ± ± + ±
Peer assessments(36-40,155): + ± ± + ±
Self-assessment/portfolios(82-92,97,156,157): ± ± ± + ±

0 = Weak; ± = Sometimes weak; + = Strong
aPredicts how effectively the student solves patient problems of varying type and difficulty.


Table II. Recommendations for achieving a functional EES

School Faculty Responsibilities
* Establish the school’s EES (Figure 6):
  • Establish overall goals and objectives for rotations.
  • Implement valid evaluation methods and adopt them across all program rotations.
  • Conduct rater training programs that emphasize accuracy (RAT) and effective use of cognitive processing skills.
  • Provide feedback to faculty about their rating skills.
  • Appoint the experiential program director as EES manager.
  • Charge the “Performance Committee” with the responsibility of acting upon evaluations of marginal/deficient students.
* Establish policies for:
  • Identifying and managing students with poor performance
  • Due process
  • Dismissal of failing students.

Experiential Program Director Responsibilities
• Ensure that experiential faculty have completed the school’s rater training program.
• Communicate evaluation policies and procedures to faculty and students.
• Ensure that performance evaluations are completed at the appropriate times and in a timely manner.
• Coordinate determination of the final grade based on the instructor’s observation-based ratings and other assessment results.
• Monitor the quality of the EES and communicate assessments to the school faculty.
• Develop a course/experiential program portfolio for accountability of program learning(96).
• Arrange for/serve as a “gold-standard” rater.

Individual Experiential Instructor Responsibilities
• Communicate rotation objectives, responsibilities, and evaluation methods both verbally and in writing at the beginning of the rotation.
• Plan time for directly observing the student in the practice setting.
• Provide feedback about performance throughout the rotation.
• If the student exhibits deficiencies, communicate this to the student before the midpoint of the rotation.
• Involve the student in making self-assessments during the rotation, discussing these assessments and comparing them to instructor assessments.
• Document communications to the student about deficiencies. Include their self-assessments in this documentation.
• Involve a peer faculty member as a “Gold-Standard Rater” when a student has questionable performance.

Therefore, Table III outlines recommendations for providing feedback effectively(112-118).

Faculty members often debate whether they should share past performance assessments with those who teach the student in current or upcoming rotations. Supporters suggest that sharing of this information promotes the elucidation of strengths and weaknesses early in the rotation. However, opponents point out that such information may bias the instructor’s assessment of student performance and breach student confidentiality.

Medical educators have recommended that faculty share only problems related to mastery of program competencies with subsequent faculty(119). Furthermore, faculty should encourage the student to communicate this information.

Table III. Strategies for providing effective feedback

General Recommendations on Providing Feedback
• Feedback should be provided by the preceptor directly involved in supervising and rating the student.
• Feedback should be given when alone with the student and in a setting conducive to discussing the performance.

Recommended Procedures for Conducting a Feedback Session with the Student
1. Feedback should be undertaken with the preceptor and student working as allies with common goals:
   • Begin the session by establishing a mutual goal of assessing the student’s performance and identifying ways to enhance the desired skills.
2. Then, seek the student’s input about their perceived performance.
3. Start the session by emphasizing positive attributes; then, discuss negative attributes.
   • Alternatively, sandwich negative feedback between positive feedback.
4. Provide specific examples of incidents (anecdotal records) that lead you to assign a given score.
5. Feedback should be:
   • Based on first-hand data
   • Regulated in quantity
   • Limited to behaviors that are remediable
   • Phrased in descriptive, non-evaluative language
   • Focused on the performance, not the performer (i.e., the student)
   • Sensitive to the student’s self-esteem
6. Encourage the student to respond and discuss his/her feelings about your assessment.
7. In collaboration, develop a “prescription” that will help the student perform appropriately.
8. Feedback should include examples of correct performance/how the skills may be correctly performed.
9. End the session by jointly agreeing when another feedback session will be held. The major goal of this session will be to assess whether the prescription has been effective.

The student also must have the right to appeal the process. Medical educators further recommend that the student’s advisor should coordinate communicating this information. To minimize bias due to knowledge of previous assessment data, we recommend dissemination of this information after the new instructor has had time to observe the student and form one’s own initial judgments. The faculty should also encourage the student to share one’s self-assessment documents with the rotation instructor.

Another rating issue concerns individual faculty members who resist using the school’s standardized assessment instrument. Although such a faculty member may have “their own” assessment instruments and desire to use them, this presents significant problems. First, although the method may seem very lucid to an individual instructor, it may not be similarly interpreted by other program faculty members. Furthermore, when a variety of rating forms are used during the rotation sequence, it is difficult to track students on a longitudinal basis, assess growth across rotations, and evaluate achievement of specific educational outcomes. Use of a standardized instrument across rotations provides data for longitudinally tracking student performance. In some specialized rotations, such as a drug information clerkship, unique assessment methods are needed.


Fig. 6. An example of an experiential evaluation system(2). Adapted with permission from Academic Medicine.

However, if the school’s standardized instrument is well designed, some of its items should have applicability to such a rotation. If some items are not applicable to a given rotation, the instructor should have an option to indicate such when completing the instrument.
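A small sketch of how a standardized instrument could honor a "not applicable" option without distorting the student's mean rating is shown below; the item names and the use of None to mark a non-applicable item are assumptions for illustration.

```python
# Sketch under stated assumptions: items marked "not applicable" (None) are
# simply excluded when the mean rating for a specialized rotation is computed.

from typing import Optional

def mean_applicable_rating(item_ratings: dict[str, Optional[float]]) -> float:
    """Mean of the rated items only; None marks an item as not applicable."""
    applicable = [r for r in item_ratings.values() if r is not None]
    if not applicable:
        raise ValueError("no applicable items were rated")
    return sum(applicable) / len(applicable)

# Example for a drug information clerkship where a direct patient-care item
# (hypothetical names) does not apply at the site.
print(mean_applicable_rating({
    "literature evaluation": 5.0,
    "written communication": 4.0,
    "patient counseling": None,   # not applicable at this site
}))  # 4.5
```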

DISCUSSION
Pharmaceutical care requires the pharmacist to perform complex functions that involve the integration of knowledge, skills, and clinical judgment(120). As evidenced by this review of the literature, it is not an easy task to assess whether the pharmacy student “shows how” and “does” pharmaceutical care. Furthermore, the methods most appropriate for measuring these levels (i.e., observation-based ratings and simulations) are costly in terms of faculty time and financial resources. All assessment methods have limitations. Therefore, a combination of observation-based ratings, simulations, and written exams or computer-based tests with EMIs is recommended.

To date, most experiential assessment methods described by pharmacy educators either lack psychometric data inferring accuracy or assess lower levels of learning, such as what a student “knows” or “knows how”(69,71,72,121-127). Pharmacy educators have described use of written simulations or OSCEs, but few schools have adopted these assessment techniques as part of their EES(10,70,128-130). A major factor contributing to this lack of data is limited resources at individual schools.

This paper has identified how experiential instructors can select the best combination of assessment methods and enhance rating accuracy. However, an effective EES will require collaboration of schools at the national level and cooperation of faculty members at the school level. Pharmacy leaders at the national level have accomplished an important step toward improving student/practitioner assessment by establishing the Center for the Advancement of Pharmaceutical Education (CAPE)(130). CAPE is encouraged to promote consortia programs since regional sharing of resources, such as training programs, will minimize any logistical problems. Based on the experience of medical educators, sharing of resources is necessary in order to develop psychometrically sound assessment methods. This strategy will also promote greater consistency in performance outcomes among our pharmacy graduates.

Significant strides are needed at the school level to ensure accountability. A school’s faculty, as a whole, should be responsible for the effectiveness of their EES. Table II and Figure 6 delineate a model EES(2,131).

The individual experiential instructor is accountable for rating with accuracy and making a sufficient number of observations. However, several actions should be taken by the collective faculty at a school to make this responsibility possible. For example, rater training should be provided and required of all faculty members (both full-time and affiliate) who teach experiential rotations.

To promote a sufficient number of observations, the school faculty must also evaluate whether the length of rotations is of appropriate duration. This paper has established the importance of making observations during the “steady-state” period of a student’s rotation and accumulating a sufficient number of them. Increasing this window of time should allow more opportunities for making “steady-state” observations. Hunt(2) has cautioned medical educators that rotations that are four weeks or less in duration may compromise the effectiveness of their school’s evaluation system and the students’ learning.

There must be a “gate-keeper” who monitors the effectiveness and identifies weaknesses of the school’s EES. The authors recommend that the faculty assign this responsibility to the school’s experiential program director. The experiential director is positioned to promote early recognition of performance problems and ensure due process. This individual should be accountable for tracking student information as the learner progresses through the rotation sequence and for bringing problem students to the attention of program faculty and/or a committee established to address performance deficiencies. Such information could be communicated to faculty in an “Annual Report of Experiential Evaluation.”

CONCLUSIONS
Pharmacy educators can no longer ignore the need for assessment methods that assess whether the student “shows how” and “does” pharmaceutical care. Observation-based ratings most appropriately assess whether the student “does,” and simulations can assess whether the student “shows how.” Experiential instructors must enhance their observation-based rating skills to accurately evaluate whether the student “does.” At the national level, pharmacy educators must collaborate and pool resources to develop psychometrically sound observation-based assessment methods and simulations, such as OSCEs. Finally, at the school level, a faculty must recognize its role and assume responsibility for assuring that the EES is effective. Only then will pharmacy educators feel confident about their graduates’ performance in the practice setting.

Am. J. Pharm. Educ., 59, 236-247(1995); received 4/18/94, accepted 6/8/95.

References (1) Beck, D.E., O’Sullivan, P.S. and Boh, L.E.,” Increasing the accuracy

of observer ratings by enhancing cognitive processing skills,” Am. J. Pharm. Educ., 59, 228-235(1995).

(2) Hunt, D.D., “Functional and dysfunctional characteristics of the prevailing model of clinical evaluation systems in North American medical schools,” Acad. Med., 67, 254-259(1992).

(3) Berner, E.S. and Bender, K.J., “Determining how to begin,” In Evaluating Clinical Competence in the Health Professions, (edit. Morgan, M.K., and Irby, D.M.) C.V. Mosby Company, St. Louis

244 American Journal of Pharmaceutical Education Vol. 59, Fall 1995

MO (1978) pp. 3-10. (4) Miller, G.E., “The assessment of clinical skills/competence/perfor-

mance,” Acad. Med., 9(Suppl), S63-S67(1990). (5) Bloom, B.S., Taxonomy of Educational Objectives. Handbook I:

Cognitive Domain, David McKay Co., Inc., New York NY (1956). (6) Kane, M.T., “The assessment of professional competence,” Eval.

Health Prof., 15, 163-181(1992). (7) Risucci, D.A. and Tortolani, A.J., “A methodological framework

for the design of research on the evaluation of residents,” Acad. Med., 65, 36-41(1990).

(8) Frye, A.W., Hoban, J.D. and Richards, B.F.,” Capturing the com-plexity of clinical learning environments with multiple qualitative methods,” Eval. Health Prof., 16, 44-60(1993).

(9) Beck, D.E. and Clayton, A.G., “Validity and reliability of an instru-ment that evaluates PharmD student performance during a patient presentation,” Am. J. Pharm. Educ., 54, 268-274(1990).

(10) Engel, G.L., “The deficiencies of the case presentation as a method of clinical teaching,” N. Engl. J. Med., 284, 20-24(1971).

(11) Rowland, P. A., Burchard, K.W., Grab, J.L. and Coe, N.P., “Influence of effective communication by surgery students on their oral exami-nation scores,” Acad. Med., 66, 169-171(1991).

(12) Boh, L.E., Pitterle, M.E., Schneider, F. and Collins, C.L., “Survey of experiential programs: Course competencies, student performance and preceptor/site characteristics,” Am. J. Pharm. Educ., 55, 105—113(1991).

(13) Pulakos, E.D., “A comparison of rater training programs: Error training and accuracy training,” J. Appl.Psychol, 69, 581-588(1984).

(14) Wexley, K.N. and Klimoski, R. “Performance appraisal: An up-date,” in Performance Evaluation, Goal setting, and Feedback, (edits. Ferris, G.R. and Rowland, K.M.) JAI Press, Inc., Greenwich CT (1990) pp. 1-45.

(15) DeNisi, A.S. and Williams, K.J., “Cognitive approaches to perfor-mance appraisal,” in Performance Evaluation, Goal Setting, and Feedback, (edits. Ferris, G.R. and Rowland, K.M.) JAI Press, Inc., Greenwich CT (1990) pp. 47-93.

(16) Landy, F.J. and Farr, J.L., The Measurement of Work Performance —Methods, Theory, and Applications. Academic Press, New York, NY (1983) pp. 22-23.

(17) Butterworth, J.S. and Reppert, E.H., “Auscultatory acumen in the general medical population,” JAMA, 174, 114-116(1960).

(18) Seegal, D. and Wertheim, A.R., “On the failure to supervise stu-dents’ performance of complete physical examinations,” ibid., 180, 476-477(1962).

(19) Hinz, C.F., “Direct observation as a means of teaching and evaluat-ing clinical skills,” J. Med. Educ., 41, 150-161(1966).

(20) Engel, G.L.,” Are medical schools neglecting clinical skills, “JAMA, 236, 861-63(1976).

(21) Stillman, P.L., May, J.R. and Meyer, D.M., “A collaborative effort to studymethodsofteachingphysicalexaminationskills,”J. Med. Educ., 56, 301-306(1981).

(22) Wray, N.P. and Friedland, J. A., “Detection and correction of house staff error in physical diagnosis,” JAMA, 249, 1035-1037(1983).

(23) Barker, K.N., “Data collection techniques: observation,” Am. J. Hosp. Pharm., 37, 1235-1243(1980).

(24) Thorndike R.L. and Hagen, E.P., Measurement and Evaluation in Psychology and Education, 4th ed., John Wiley & Sons. New York NY (1977) pg. 506-534.

(25) DeMers, J.L., “Observational assessment of performance,” inEvalu-ating Clinical Competence in the Health Professions, (edits. Morgan, M.K. and Irby, D.M.) C.V. Mosby, St. Louis MO (1978) pp. 89-115.

(26) Sax, G., Principles of Educational and Psychological Measurement and Evaluation, 2nd ed., Wadsworth Publishing Co., Belmont CA (1980) pp. 145-168.

(27) Srinivas, S. and Motowidlo, S.J., “Effects of raters’ stress on the dispersion and favorability of performance ratings,” J. Appl. Psychol., 72, 247-251(1987).

(28) Friedman, M. and Mennin, S.P., “Rethinking critical issues in performance assessment,” Acad. Med., 66, 390-395(1991).

(29) Shavelson, R. J. and Webb, N.M., Generalizability Theory: A Primer, Sage Publications, Newbury Park CA (1991) pp. 1-16.

(30) Anon., “Professional judgment in evaluation,” J. Nurs. Educ., 33, 243(1994).

(31) Green, R.L., “Sources of recency effects in free recall,” Psychol. Bull., 99, 221-228(1986).

(32) Kaplan, M.F., “Stimulus inconsistency and response dispositions in forming judgments of other persons,” J. Personality Soc. Psychol., 22, 58-64(1973).

(33) Borman, W.C., “Individual differences correlates of accuracy in evaluating others’ performance effectiveness,” Appl. Psychol. Meas., 3, 103-115(1979).

(34) Cardy, R.L. and Kehoe, J.F., “Rater selective attention ability and appraisal effectiveness: The effect of a cognitive style on the accuracy of differentiation among ratees,” J. Appl. Psychol., 69, 589-594(1984).

(35) Littlefield, J.H., “Developing and maintaining a resident rating system,” in How to Evaluate Residents, (edits. Lloyd, J.S. and Langsley, D.G.) American Board of Medical Specialties, Chicago IL (1986) pp. 117-130.

(36) Woolliscroft, J.O., Howell, J.D., Patel, B.P. and Swanson, D.B., “Resident-patient interactions: The humanistic qualities of internal medicine residents assessed by patients, attending physicians, program supervisors, and nurses,” Acad. Med., 69, 216-224(1994).

(37) Seefeldt, M. and Blumberg, P., “Rating dental students - a comparison of faculty and patient perspectives,” Eval. Health Prof., 7, 365-374(1984).

(38) Butterfield, P.S., Mazzaferri, E.L. and Sachs, L.A., “Nurses as evaluators of the humanistic behavior of internal medicine residents,” J. Med. Educ., 62, 842-849(1987).

(39) Cope, D.W., Linn, L.S., Leake, B.D. and Barrett, P.A., “Modification of residents’ behavior by preceptor feedback of patient satisfaction,” J. Gen. Int. Med., 1, 394-398(1986).

(40) Linn, L.S., DiMatteo, M.R., Cope, D.W. and Robbins, A., “Measuring physicians’ humanistic attitudes, values and beliefs,” Med. Care, 25, 504-515(1987).

(41) Banks, C.G. and Murphy, K.R., “Toward narrowing the research-practice gap in performance appraisal,” Personnel Psychol., 38, 335-345(1985).

(42) Cohen, G.S., Blumberg, P., Ryan, N.C. and Sullivan, P., “Do final grades reflect written qualitative evaluations of student performance?,” Teach. Learn. Med., 5, 10-15(1993).

(43) DaRosa, D.A., “Commentary on ‘Do final grades reflect written qualitative evaluations of student performance?’,” ibid., 5, 16-17(1993).

(44) Miller, G.M., “Commentary on Assessment of clinical skills with standardized patients: State of the art,” ibid., 2, 77-78(1990).

(45) van der Vleuten, C.P.M. and Swanson, D.B., “Assessment of clinical skills with standardized patients: State of the art,” ibid., 2, 58-76(1990).

(46) Stillman, P.L., Swanson, D.B. and Smee, S., et al, “Assessing clinical skills of residents with standardized patients,” Ann. Intern. Med., 105, 762-771(1986).

(47) Vu, N.V., Barrows, H.S., Marcy, M.L., et al., “Six years of comprehensive, clinical, performance-based assessment using standardized patients at the Southern Illinois University School of Medicine,” Acad. Med., 67, 42-50(1992).

(48) Anderson, M.B., Stillman, P.L. and Wang, Y., “Growing use of standardized patients in teaching and evaluation in medical education,” Teach. Learn. Med., 6, 15-22(1994).

(49) Tamblyn, R., Abrahamowicz, M., Schnarch, B., et al., “Can standardized patients predict real-patient satisfaction with the doctor-patient relationship?,” ibid., 6, 36-44(1994).

(50) Colliver, J.A., Marcy, M.L., Vu, N.V., et al., “Effect of using multiple standardized patients to rate interpersonal and communication skills on intercase reliability,” ibid., 6, 45-48(1994).

(51) Shatzer, J.H., Wardrop, J.L., Williams, R.G. and Hatch, T.F., “Generalizability of performance on different-station-length standardized patient cases,” ibid., 6, 54-58(1994).

(52) Barrows, H.S., “An overview of the uses of standardized patients for teaching and evaluating clinical skills,” Acad. Med., 68, 443-451(1993).

(53) Colliver, J.A. and Williams, R.G., “Technical issues: Test application,” ibid., 68, 454-460(1993).

(54) Miller, G.E., “Conference summary,” ibid., 68, 471-474(1993).

(55) Finke, L., Messmer, P., Spruck, M., et al., “Measuring clinical skills via computer for RNs in a BSN-MSN educational mobility program,” in Measurement of Nursing Outcomes. Volume Three: Measuring Clinical Skills and Professional Development in Education and Practice, (edits. Waltz, C.F. and Strickland, O.L.) Springer Publishing Co., New York NY (1990) pp. 44-53.

(56) Ross, M., Carroll, G., Knight, J., et al., “Using the OSCE to measure clinical skills performance in nursing,” J. Adv. Nurs., 13, 45-56(1988).

(57) Bramble, K., “Nurse practitioner education: Enhancing performance through the use of the objective structured clinical assessment,” J. Nurs. Educ., 33, 59-65(1994).

(58) Tamblyn, R.M., Abrahamowicz, M., Berkson, L., et al., “Assessment of performance in the office setting with standardized patients,” Acad. Med., 10(Suppl), S22-S24(1992).

(59) Noguchi, J. and Hill-Besinque, K., “The use of standardized patients in an objective structured clinical examination,” Am. J. Pharm. Educ., 56(Suppl), 83S(1992).

(60) Fielding, D.W., Page, G.G., Schulzer, M., et al., “Developing objective structured clinical examinations for performance evaluation in pharmacy,” ibid., 57(Suppl), 101S(1993).

(61) Guidry, T.D. and Cohen, P.A., “A practical examination for student assessment in an externship program,” ibid., 51, 280-284(1987).

(62) King, A.M., Perkowski-Rogers, L.C. and Pohl, H.S., “Planning standardized patient programs: Case development, patient training, and costs,” Teach. Learn. Med., 6, 6-14(1994).

(63) Reznick, R.K., Smee, S., Baumber, J.S., et al., “Guidelines for estimating the real cost of an objective structured clinical examination,” Acad. Med., 68, 513-517(1993).

(64) Morrison, L. J. and Barrows, H.S., “Developing consortia for clinical practice examinations: The Macy Project,” Teach. Learn. Med., 6, 23-27(1994).

(65) Clem, J.R., Murry, D.J., Perry, P.J., et al, “Performance in a clinical pharmacy clerkship: computer-aided instruction versus traditional lectures,” Am. J. Pharm. Educ., 56, 259-263(1992).

(66) Greer, M.L., Kirk, K.W. and Crismon, M.L., “The development and pilot testing of a community-practice, clinical pharmacy decision-making assessment instrument,” Eval. Health Prof., 12, 207-231(1989).

(67) Boh, L.E., Pitterle, M.E., Wiederholt, J.B. and Tyler, L.S., “Development and application of a computer simulation program to enhance the problem-solving skills of students,” Am. J. Pharm. Educ., 51, 253-261(1987).

(68) MacKinnon, G.E., Pitterle, M.E., Boh, L.E. and DeMuth, J.E., “Computer-based patient simulations: Hospital pharmacists’ performance and opinions,” Am. J. Hosp. Pharm., 49, 2740-2745(1992).

(69) McKenzie, M.W., Kimberlin, C.L., Philips, E.W., et al, “Content, preparation, and formative evaluation of an interactive videodisc system to enhance communication skills in pharmacy students,” Am. J. Pharm. Educ., 57, 230-238(1993).

(70) Tobias, D.E., Speedie, S.M. and Kerr, R. A., “Evaluation of clinical competence through written simulations,” ibid., 42, 320-323(1978).

(71) Nelson, A.A., Bober, K.F. and Bashook, P.G., “Evaluation of a college-structured practical experience program,” ibid., 40, 232-236(1976).

(72) Elenbaas, R.M., “Evaluation of students in the clinical setting,” ibid., 40, 410-417(1976).

(73) Barrows, H.S., “The scope of clinical education,” J. Med. Educ., 61, 23-33(1986).

(74) Elstein, A.S., “Beyond multiple-choice questions and essays: The need for a new way to assess clinical competence,” Acad. Med., 68, 244-249(1993).

(75) Dillon, G.F. and Clyman, S.G., “The computerization of clinical science examinations and its effect on the performances of third-year medical students,” ibid., 67, S66-S68(1992).

(76) Richards, B.F., Philp, E.B. and Philp, J.R., “Scoring the objective structured clinical examination using a microcomputer,” Med. Educ., 23, 376-380(1989).

(77) Case, S.M. and Swanson, D.B., “Extended-matching items: A practical alternative to free-response questions,” Teach. Learn. Med., 5, 107-115(1993).

(78) Van Susteren, T.J., Cohen, E.B., Simpson, D.E., “Alternate-choice test items: Implications for measuring clinical judgment,” ibid., 3, 33-37(1991).

(79) Blackwell, T.A., Ainsworth, M.A., Dorsey, N.K., et al., “A comparison of short-answer and extended-matching question scores in an objective structured clinical exam,” Acad. Med., 66(Suppl 9), S40-S42(1991).

(80) Ainsworth, M.A., Turner, H.E., Solomon, D.J. and Callaway, M.R., “Innovation in conducting and scoring a clerkship objective structured clinical examination,” Teach. Learn. Med., 6, 64-67(1994).

(81) Abrahamson, S., “Assessment of student clinical performance,” Eval. Health Prof., 8, 413-427(1985).

(82) Gordon, M.J., “Self-assessment programs and their implications for health professions training,” Acad. Med., 67, 672-679(1992).

(83) Gordon, M.J., “A review of the validity and accuracy of self-assessments in health professions training,” ibid., 66, 762-769(1991).

(84) Arnold, L., Willoughby, T.L. and Calkins, E.V., “Self-evaluation in undergraduate medical education: A longitudinal perspective,” J. Med. Educ., 60, 21-28(1985).

(85) O’Sullivan, P.S., Pinsker, J. and Landau, C., “Evaluation of strategies selected by residents: The roles of self-assessment, training level, and sex,” Teach. Learn. Med., 3, 101-107(1991).

(86) Abbott, S.D., Carswell, R., McGuire, M. and Best, M., “Self-evaluation and its relationship to clinical evaluation,” J. Nurs. Educ., 27, 219-224(1988).

(87) Calhoun, J.G., Woolliscroft, J.O., Ten Haken, J.D., et al., “Evaluating medical student clinical skill performance — Relationships among self, peer, and expert ratings,” Eval. Health Prof., 11, 201-212(1988).

(88) Calhoun, J.G., Ten Haken, J.D. and Woolliscroft, J.O., “Medical students’ development of self- and peer-assessment skills: A longitudinal study,” Teach. Learn. Med., 2, 25-29(1990).

(89) Herbert, W.N., McGaghie, W.C., Droegemueller, W., et al., “Student evaluation in obstetrics and gynecology: Self- versus departmental assessment,” Obstet. Gynecol., 76, 458-461(1990).

(90) Fincher, R.M. and Lewis, L.A., “Learning, experience, and self-assessment of competence of third-year medical students in performing bedside procedures,” Acad. Med., 69, 291-295(1994).

(91) Woolliscroft, J.O., TenHaken, J., Smith, J. and Calhoun, J.G., “Medical students’ clinical self-assessments: comparisons with external measures of performance and the students’ self-assessments of overall performance and effort,” ibid., 68, 285-294(1993).

(92) Briceland, L.L., Kane, M.P. and Hamilton, R.A., “Evaluation of patient-care interventions by PharmD clerkship students,” Am. J. Hosp. Pharm., 49, 1130-1132(1992).

(93) Mason, R.N., Pugh, C.B., Boyer, S.B. and Stiening, K.K., “Computerized documentation of pharmacists’ interventions,” ibid., 51, 2131-2138(1994).

(94) Brown, G., “Assessing the clinical impact of pharmacists’ interventions,” ibid., 48, 2644-2647(1991).

(95) Kane, M.P., Briceland, L.L. and Hamilton, R.A., “Solving drug-related problems in the professional experience program,” Am. J. Pharm. Educ., 57, 347-351(1993).

(96) Arter, J. and Spandel, V., “NCME instructional module: Using portfolios of student work in instruction and assessment,” Educational Meas.: Issues and Practice, 11, 36-44(1992).

(97) Jensen, G.M. and Saylor, C., “Portfolios and professional development in the health professions,” Eval. Health Prof., 17, 344-357(1994).

(98) Schon, D., “The theory of inquiry: Dewey’s legacy to education,” Curriculum Inquiry, 22, 119-140(1992).

(99) Wolf, K., Bixby, J., Glen, J. and Gardner, H., “To use their minds well: Investigating new forms of student assessment,” Rev. Res. Educ., 17, 31-73(1991).

(100) Metheny, W.P. and Carline, J.D., “The problem student: A comparison of student and physician views,” Teach. Learn. Med., 3, 133-137(1991).

(101) Tonesk, X., “Clinical judgment of faculties in the evaluation of clerks,” J. Med. Educ., 58, 213-214(1983).

(102) Tonesk, X. and Buchanan, R.G., “Faculty perceptions of current evaluation systems,” ibid., 60, 573-576(1985).

(103) Tonesk, X. and Buchanan, R.G., “An AAMC pilot study by 10 medical schools of clinical evaluation of students,” ibid., 62, 707-718(1987).

(104) Edwards, J.C., Plauche, W.C., Vial, R.H., et al., “Two ways to assess problems of evaluating third-year and fourth-year medical students: A case study,” Acad. Med., 64, 463-467(1989).

(105) Hunt, D.D., Carline, J., Tonesk, X., et al, “Types of problem students encountered by clinical teachers of clerkships,” Med. Educ., 23, 14-18(1989).

(106) Orchard, C.O., “The nurse educator and the nursing student: A review of the issue of clinical evaluation procedures,” J. Nurs. Educ., 33, 245-255(1994).

(107) Irby, D.M. and Milam, S., “The legal context for evaluating and dismissing medical students and residents,” Acad. Med., 64, 639-643(1989).

(108) Helms, L.B. and Helms, C.M., “Forty years of litigation involving medical students and their education: I. General educational issues,” ibid., 66, 1-7(1991).

(109) Helms, L.B. and Helms, C.M., “Forty years of litigation involving residents and their training: I. General programmatic issues,” ibid., 66, 649-655(1991).

(110) Helms, L.B. and Helms, C.M., “Forty years of litigation involving residents and their training: II. Malpractice issues,” ibid., 66, 718-725(1991).

(111) Fassett, W.E. and Olswang, S.G., “Affiliate faculty and student evaluation, discipline and dismissals,” Am. J. Pharm. Educ., 55, 211-218(1991).

(112) Klein, R.H. and Babineau, R., “Evaluating the competence of trainees: It’s nothing personal,” Am. J. Psychiatry, 131, 788-791(1974).

(113) Brinko, K.T., “The practice of giving feedback to improve teaching. What is effective?,” J. Higher Educ., 64, 574-593(1993).

(114) Baron, R.A., “Negative effects of destructive criticism: Impact on conflict, self-efficacy, and task performance,” J. Applied Psychol., 73, 199-207(1988).

(115) Ilgen, D.R., Fisher, C.D. and Taylor, M.S., “Consequences of individual feedback on behavior in organizations,” ibid., 64, 349-371(1979).

(116) Rezler, A.G. and Anderson, A.S., “Focused and unfocused feedback and self-perception,” J. Educational Res., 65, 61-64(1971).

(117) Wigton, R.S., Patil, K.D. and Hoellerich, V.L., “The effect of feedback in learning clinical diagnosis,” J. Med. Educ., 61, 816-822(1986).

(118) Davies, D. and Jacobs, A., “Sandwiching complex interpersonal feedback,” Small Group Behavior, 16, 387-396(1985).

(119) Cohen, G.S. and Blumberg, P., “Investigating whether teachers should be given assessments of students made by previous teachers,” Acad. Med., 66, 288-289(1991).

(120) Commission to Implement Change in Pharmaceutical Education, “Background paper II: Entry-level, curricular outcomes, curricular content and educational process,” Am. J. Pharm. Educ., 57, 377-385(1993).

(121) Andrew, B.J., “A technique for assessment of pharmacy students’ skills in patient interviewing,” ibid., 37, 290-299(1973).

(122) Warner, D., “Evaluation of a pilot pharmacy clerkship,” ibid., 34, 256-264(1970).

(123) Speranza, K.A., “A diary study of the socialization process in a hospital clinical experience course,” ibid., 39, 24-30(1975).

(124) McKenzie, M.W., Johnson, S.M. and Bender, K.J., “A competency-based, self-instructional module on medication history interviewing for pharmacy students: Rationale, description and formative evaluation,” ibid., 41, 133-142(1977).

(125) Tobias, D.E., Michocki, R.J. and Edmondson, W.H., “Evaluation of students in a competency-based undergraduate clinical clerkship,” ibid., 42, 31-34(1978).

(126) Pancorbo, S., Holloway, R.L., McNeiley, E. and McCoy, H.G., “Development of an evaluative procedure for clinical clerkships,” ibid., 44, 12-16(1980).

(127) Friend, J.R., Wertz, J.N., Hicks, C. and Billups, N.F., “A multi-facetted approach to externship evaluation,” ibid., 50, 111-126(1986).

(128) Roffman, D.S., Tobias, D.E. and Speedie, S.M., “Validation of written simulations as measures of problem solving for pharmacy students,” ibid., 44, 16-24(1980).

(129) Carter, R.A., Bennett, R.W., Black, C.D. and Blank, J.W., “Use of simulations as performance standards for external degree doctor of pharmacy students,” ibid., 49, 119-123(1985).

(130) Trinca, C.E., “The center for the advancement of pharmaceutical care education: Translating commitment and concern to the community,” ibid., 57(Suppl), 7S-9S(1993).

(131) Short, J.P., “The importance of strong evaluation standards and procedures in training residents,” Acad. Med., 68, 522-525(1993).

(132) Zelenock, G.B., Calhoun, J.G., Ffockman, E.M. and Youmans, L.C., “Oral examinations: Actual and perceived contributions to surgery clerkship performance,” Surgery, 97, 737-743(1985).

(133) Smith, R.M., “The triple-jump examination as an assessment tool in the problem-based medical curriculum at the University of Hawaii,” Acad. Med., 68, 366-372(1993).

(134) Wong, J., Wong, S. and Richard, J., “Implementing computer simulations as a strategy for evaluating decision-making skills of nursing students,” Comput. Nurs., 10, 264-269(1992).

(135) Kroboth, F.J., Hanusa, B.H. and Parker, S., “The inter-rater reliability and internal consistency of a clinical evaluation exercise,” J. Gen. Intern. Med., 7, 174-179(1992).

(136) Dunnington, G., Reisner, L., Witzke, D. and Fulginiti, J., “Structured single-observer methods of evaluation for the assessment of ward performance on the surgical clerkship,” Am. J. Surg., 159, 423-426(1990).

(137) Premi, J., “An assessment of 15 years’ experience in using videotape review in a family practice residency,” Acad. Med., 66, 56-57(1991).

(138) Rassin, G.M. and McCormick, D.P., “Use of video review to evaluate house staff performance of well-baby examinations: A preliminary investigation,” Teach. Learn. Med., 4, 168-172(1992).

(139) Anderson, E., “Co-assessment as a unique approach to measuring students’ clinical abilities,” J. Nurs. Educ., 29, 42-43(1990).

(140) Littlefield, J.H., Harrington, J.T., Anthracite, N.E. and Garman, R.E., “A description and four-year analysis of a clinical clerkship evaluation system,” J. Med. Educ., 56, 334-340(1981).

(141) Noel, G.L., Herbers, J.E., Caplow, M.P., et al., “How well do internal medicine faculty members evaluate the clinical skills of residents?,” Ann. Intern. Med., 117, 757-765(1992).

(142) Kalet, A., Earp, J.A. and Kowlowitz, V., “How well do faculty evaluate the interviewing skills of medical students?,” J. Gen. Intern. Med., 7, 499-505(1992).

(143) Carline, J.D., Paauw, D.S., Thiede, K.W. and Ramsey, P.G., “Factors affecting the reliability of ratings of students’ clinical skills in a medicine clerkship,” ibid., 7, 506-510(1992).

(144) Dielman, T.E., Hull, A.L. and Davis, W.K., “Psychometric properties of clinical performance ratings,” Eval. Health Prof., 3, 103-117(1980).

(145) Boodoo, G.M. and O’Sullivan, P.S., “Assessing pediatric clerkship evaluations using generalizability theory,” ibid., 9, 467-486(1986).

(146) Boodoo, G.M. and O’Sullivan, P., “Obtaining generalizability coefficients for clinical evaluations,” ibid., 5, 345-358(1982).

(147) O’Donohue, W.J. and Wergin J.F., “Evaluation of medical students during a clinical clerkship in internal medicine,” J. Med. Educ., 53, 55-58(1978).

(148) Wood, V., “Evaluation of student nurse clinical performance — A continuing problem,” Int. Nurs. Rev., 29, 11-18(1982).

(149) Thompson, W.G., Lipkin, M., Gilbert, D.A., et al., “Evaluating evaluation: Assessment of the American Board of Internal Medicine resident evaluation form,” J. Gen. Int. Med., 5, 214-217(1990).

(150) McLeod, P.J., “The student case presentation: An integral component of the undergraduate curriculum,” Teach. Learn. Med., 3, 113-116(1991).

(151) McCain, G.A., Molineux, J.E., Pederson, L. and Stuart, R.K., “Consultation letters as a method for assessing in-training performance in a department of medicine,” Eval. Health Prof., 11, 21-42(1988).

(152) Woolliscroft, J.O., Calhoun, J.G., Beauchamp, C., et al., “Evaluating the medical history: Observation versus write-up review,” J. Med. Educ., 59, 19-23(1984).

(153) Kane, R.L., Leigh, E.H., Feigal, D.W., et al., “A method for evaluating patient care and auditing skills of family practice residents,” J. Family Pract., 2, 205-210(1975).

(154) Ognibene, A.J., Jarioura, D.G., Illera, V.A., et al., “Using chart reviews to assess residents’ performances of components of physical examinations: A pilot study,” Acad. Med., 69, 583-587(1994).

(155) Goldman, R.L., “The reliability of peer assessments—A meta-analysis,” Eval. Health Prof., 17, 3-21(1994).

(156) Gerrish, K., “An evaluation of a portfolio as an assessment tool for teaching practice placements,” Nurse Educ. Today, 13, 172-179(1993).

(157) Callister, L.C., “The use of student journals in nursing education: Making meaning out of clinical experience,” J. Nurs. Educ., 32, 185-186(1993).
