
  • AKORANGA

    ISSUE 08 2012: Evaluation

    The Periodical about learning and teaching

  • ISSUE 08 2012: Evaluation


    Editorial (Kerry Shephard, HEDC)

    Senate's recommendations on course and teaching evaluation (Sarah Stein, HEDC)

    How should we evaluate the impact of e-interventions? (Swee-Kin Loke, HEDC)

    Using student data to improve teaching (Clinton Golding, HEDC)

    Student evaluations: closing the loop (Lynley Deaker, Sarah Stein, Jo Kennedy, HEDC)

    1000 words to change an institution? (Kerry Shephard, HEDC)

    Interpreting the results from a student feedback survey (David Fletcher, Department of Mathematics and Statistics)

    Evaluations: perspectives from end-users (Angela McLean, HEDC)

    No, we don't run popularity polls! (Jo Kennedy, HEDC)

    QAU learning and teaching (Romain Mirosa, Quality Advancement Unit)

    Beyond centralised student evaluation (Swee-Kin Loke, HEDC)

    Akoranga Evaluation

    This document is printed on an environmentally responsible paper produced using FSC Certified 100% Post Consumer recycled, Process Chlorine Free (PCF) pulp.

  • Akoranga Editorial

    Kerry Shephard, HEDC

    Oh dear, this edition of Akoranga is very late. The editorial team decided some time ago that its theme would be evaluation of teaching. Late in 2010, Senate proposed some substantial changes to evaluation processes at the University and asked HEDC to put these into effect. HEDC asked for help, and a working group was established, with representatives from each of the divisions and from the Department of Mathematics and Statistics, to recommend how Senate's proposals might operate. Senate was concerned about our possible overdependence on the individual teaching feedback survey and the lack of use of the course feedback survey. Senate was also keen to ensure that these feedback instruments are used appropriately and constructively for developmental purposes by teachers and by departments, and to allow evidence-based decisions on the quality of our teaching. I chaired the working group, and somewhat optimistically thought it would all be over by Christmas (that's last Christmas). It has been a much longer task for all involved.

    We thought that we would find some great synergy between this edition of Akoranga and the recommendations of Senate and the deliberations of the working group. Indeed we have, but Akoranga has had to work to the same timescales as the working group, hence the delay.

    The editorial team hopes that the wait has been worthwhile. Within this edition we have provided a synopsis of Senate's recommendations on the use of student feedback and some insights into the task of the working group. We have given colleagues within HEDC's Evaluations Unit (Jo Kennedy) and the Quality Advancement Unit (Romain Mirosa) free rein to describe their contributions to the evaluation of teaching. Sarah Stein, Lynley Deaker and Jo Kennedy have described some new research in this area, emphasising a primary rationale for evaluating teaching: to get better at it. Clinton Golding tells us how he uses student feedback to help to develop his teaching, David Fletcher provides a statistician's perspective on interpreting quantitative student feedback, and Swee-Kin Loke addresses the particular problems of developing teaching that involves the use of computers. Kerry Shephard asks if we could use our extensive evaluation processes to address some broader questions, such as how well our students achieve the graduate attributes that we plan for them. Angela McLean has added some student perspectives to keep us on our toes, and, to practise what we preach, we end this edition with a request for readers to help us to evaluate the impact that Akoranga may be having on learning and teaching at the University of Otago.

    Editor: Kerry Shephard | Sub-editors: Candi Young & Swee-Kin Loke | Design: Gala Hesson

  • Senate's recommendations on Course and Teaching Evaluation

    Sarah Stein, HEDC

    In 2010, Senate discussed the approaches that the University uses to evaluate its teaching and its courses, and agreed on a set of recommendations that required HEDC to investigate, recommend and initiate changes.

    Senate's recommendations were:

    Recommendation 1: That Course Evaluation Questionnaires normally be used by all departments or programmes on a three-year cycle. Processes will be established to monitor this requirement, which should be part of regular evaluation policy;

    Recommendation 2: That a set of indicators be used as guidelines to support and develop a strong teaching culture within a department;

    Recommendation 3: That when Course Evaluation Questionnaire reports are submitted as evidence for confirmation, promotion or appraisal processes, they may be accompanied by a context form that includes information on the staff member's contribution to the course and issues outside the staff member's control;

    Recommendation 4: That HEDC investigates the desirability and feasibility of having one evaluation system that is fully customisable and is available in both hard copy and online formats;

    Recommendation 5: That HEDC continues to explore evaluation policy and practice at Otago and monitors the impact of any changes.

    In mid-2011, HEDC established a Working Group, with membership drawn from across the University, to deliberate on Senate's recommendations and to prepare a series of plans to put the recommended changes into effect. The Group's advice will underpin and inform HEDC's response to the Senate recommendations. That response will be led by HEDC's Evaluation Research and Development Group, a core group in HEDC that focuses on running an efficient student evaluation system for the University, as well as providing professional development support, raising awareness about evaluation and engaging in research in, through and about evaluation.

    I have not been a member of the Working Group, but I am aware of the myriad issues that can arise when the topic of student evaluations is raised. I do know that the Working Group has been discussing many of these issues in its consideration of the tasks it has been set. As coordinator of HEDC's Evaluation Research and Development Group, I offer here some comments on issues concerning student evaluation.

    I have heard many people say that student evaluation systems are flawed because the questionnaires only elicit information about student experience, and do not capture all aspects of a whole course or teaching. Student evaluation questionnaires, such as the ones used in our (and other institutions') centralised student evaluation system, do not stand alone, so they do not pretend to be able to gather everything one needs to know about teaching or courses. They ask students to provide their responses about their experiences of teaching and the courses with which they have been associated.


    The student view, gathered through the student evaluation questionnaires, is just one type of data gathered from one (important) group of stakeholders. Knowing how students perceive our courses and our teaching allows us to understand more about the impact we are having, which we need to know if we are serious about developing as educators. To gain a fuller, rounder understanding of our teaching and courses, we need to gather data from other sources to interpret in conjunction with the data we gather from our students.

    Evaluation data from the student evaluation questionnaires should be seen as the start of the conversation, not the end. Monitoring (gathering data) and development (interpreting and acting on the data) are part of the same process, and one without the other is an indication of a flawed system.

    Unfortunately, because so much emphasis is placed upon the monitoring aspect of student evaluation data in this University for the confirmation, promotion and annual performance review processes, there is a tendency for student evaluation to be seen as a summative exercise that is essentially punitive, private and individual. We have some wonderful guidelines about student evaluation and the use of the data we gather at this University. The recent research project I have been undertaking has shown me how far we are in advance of many tertiary institutions in this regard. It is a shame that we do not capitalise upon the guidance these guidelines provide us.

    There are concerns voiced by many that the Individual Teacher questionnaire isolates the teacher from the whole curriculum in an artificial way, and, because the questionnaire scores are used in confirmation and promotion processes, issues of confidentiality and secrecy surround the results. On the other hand, there are standard questions in our Individual Teacher questionnaire, and this means that comparisons about teaching effectiveness can be made over time. Departments, Divisions and the Institution can use and share the aggregated data from the standard questions in the questionnaires to monitor, report and respond to the status and changes in teaching.

    The current Course Evaluation questionnaire does not have any standard questions, so it supports very well the need for teachers to focus on specific aspects of their courses. Because results are shared with HoDs and Course Coordinators, there are some bases for the results of the University's Course Evaluation questionnaires to be part of a collaborative and collegial process of ongoing and active review and development. However, these evaluations do not include any standard questions, and without deliberate decisions about including relevant questions each time an evaluation is run, they provide no opportunity for programmes, Departments, Divisions or the Institution to know, over time, how well courses are performing.

    Individual teachers, Departments, Divisions and the whole institution should want to gather and see the data about courses and teaching. Each group has a right and a responsibility to be gathering such data to demonstrate and understand the status and process of the development of teaching and courses over time. Without this data, engagement in development and improvement cannot happen.

    I hope that one of the Working Group's outcomes will be recommendations that could form core elements to be included in a University policy about evaluation (perhaps as it relates to Recommendations 2 and 5, above). Such a policy would define and describe student evaluation processes and practices and their role in enhancing teaching and learning. I hope, in the light of research as well as experience of the issues at Otago, that such a policy would separate the auditing/quality assurance process from the developmental process, while simultaneously explaining and describing how both monitoring quality and active engagement in development are parts of the same improvement process. I also hope that student evaluation is seen as a key aspect of engagement as a community in the ongoing pursuit of better teaching, better courses and enriched student learning. I hope that within that policy the notion of "closing the loop" at individual teacher, programme, Department, Division and whole institution levels will feature very strongly.

    The policy should provide the basis for systems, processes and practices that are explicit, aligned, transparent, flexible, useful and meaningful.

    A big emphasis of our ongoing activity in HEDC should be placed on the professional development of groups and individuals across the University (including students) to inform and guide understanding about evaluation, including what it means to engage meaningfully, actively and collaboratively with student evaluation.


  • How should we evaluate the impact of e-interventions?

    Swee-Kin Loke, HEDC

    Most educators are interested in improving student learning, hence it is no surprise that educational technologists over the years have been preoccupied with questions of the type "Does viewing instructional films improve learning?" or "Does using the iPad in class lead to better learning outcomes?" For example, as long ago as 1972, Ackers and Oosthoek reported how they attempted to "transfer knowledge to [their] students by means of tape recorders" (p. 137). They first divided their students into two groups, and then gave the tape group tapes containing information in economics that students could listen to at their own pace. The lecture group followed ordinary economics lectures. At the end of the experiment, the authors evaluated the use of tape recorders by comparing the two groups' mean scores in an economics examination. Similar comparative studies were conducted in more recent times: for example, Knox (1997) evaluated the impact of video-conferenced versus live lectures in actuarial science.

    Evaluations of this type are often called "media comparison studies" because they compare how much or how well students have learned from a lesson presented via a new medium (e.g. tape recorder) versus existing instruction (e.g. lectures). Media comparison studies used to be prevalent in the field of educational technology (Reiser, 2001), but have fallen into disrepute over the last two decades for the following reasons. Firstly, media comparison studies conflate media (e.g. tape recorder) and instructional method (e.g. self-paced learning) (Clark, 1994). For example, in the economics-by-tape study described above, any difference in student scores could be attributed to the activity of self-paced learning rather than to the use of tape recorders, reflecting a conceptual flaw in media comparison studies (Warnick & Burbules, 2007). Secondly, research over 40 years has found that technology's impact on student achievement (versus no technology) tends to range from low to moderate (Bernard et al., 2004; Tamim et al., 2011). Given these findings, Professor Emeritus Tom Reeves (2011) jokingly advised Otago staff, in the event that more PhD students asked to compare online learning with traditional instruction, to "chase them out of your office" [57:10], because we already know that technology's impact is likely to be modest.

    So where does that leave educators who would like to evaluate the impact of technology on student learning?

    Clark (1994) suggests that we should reframe the evaluation around the instructional method and not the technology. Because our primary goal is improving the way we teach (not inserting a piece of technology into our teaching practice), our evaluation should relate directly to the way we teach. In fact, I believe that we should conceive of and evaluate e-interventions like any other pedagogical innovation (e.g. self-paced learning, or problem-based learning (PBL)). As such, the question at the start of this section needs reframing: educators using technology should be less concerned with evaluating the impact of our technology use and more concerned with evaluating the impact of learning activities mediated by technology. This is a subtle but important difference in how we approach the improvement of teaching and learning, and some suggest that too many educators have foregrounded technology over pedagogy over the years (Reeves, McKenney, & Herrington, 2011).



    When we conceive of e-interventions as pedagogical innovations, we gain some useful insights. In their meta-synthesis "When is PBL more effective?", Strobel and van Barneveld (2009) reported how PBL in medical education led to better outcomes in clinical knowledge and skills, whereas traditional instruction resulted in better outcomes in basic science knowledge. Perhaps the important impact of e-interventions lies not in how much students learn but in how they learn (Kozma, 1994). For example, colleagues from the Otago School of Pharmacy and the Higher Education Development Centre (HEDC) sought to identify qualitative differences in student learning processes between paper- and computer simulation-based workshops (Loke, Tordoff, Winikoff, McDonald, Vlugter, & Duffull, 2011). We evaluated the project by teaching, observing and audio-recording two paper- and two simulation-based workshops. Our data showed that students in the simulation-based workshops learned that they needed to frame the clinical problem themselves, rather than rely on having all the necessary information laid out on paper. If Pharmacy students are expected to frame clinical problems themselves, and if they can develop that problem-solving skill in a simulation-based workshop, then we have sound pedagogical reasons to continue running the simulation-based workshops for our students' benefit.

    References

    Ackers, G. W., & Oosthoek, J. K. (1972). The evaluation of an audio-tape mediated course. British Journal of Educational Technology, 3(2), 136-146.

    Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., & Wallet, P. A. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379-439.

    Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.

    Knox, D. M. (1997). A review of the use of video conferencing for actuarial education: A three-year case study. Distance Education, 18(2), 225-235.

    Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

    Loke, S. K., Tordoff, J., Winikoff, M., McDonald, J., Vlugter, P., & Duffull, S. (2011). SimPharm: How pharmacy students made meaning of a clinical case differently in paper- and simulation-based workshops. British Journal of Educational Technology, 42(5), 865-874.

    Reeves, T. C. (2011). Authentic tasks in online and blended courses: How to evaluate whether the design is effective for learning. University of Otago. Retrieved from http://unitube.otago.ac.nz/view?m=9WOR29G9iGC

    Reeves, T. C., McKenney, S., & Herrington, J. (2011). Publishing and perishing: The critical importance of educational design research. Australasian Journal of Educational Technology, 27(1), 55-65.

    Reiser, R. A. (2001). A history of instructional design and technology: Part I: A history of instructional media. Educational Technology Research and Development, 49(1), 53-64.

    Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4-28.

    Warnick, B. R., & Burbules, N. C. (2007). Media comparison studies: Problems and possibilities. Teachers College Record, 109(11), 2483-2510.


  • Clinton Golding

    using student data to improve teaching


    In 2006 the average student evaluation result for my subjects was 4.6 out of 5 for overall quality of learning (I was teaching at the University of Melbourne, where 5 was best). On the basis of student comments, formal and informal, as well as their assessment results, I hypothesised that students were unclear about the requirements of their final assessment: they just did not get what I was asking them to do. I then devised and implemented a plan for helping my students to better understand the assessment task. In one teaching session I explained the assessment criteria, we identified what would be written in an assignment that met these criteria, and the students assessed a draft of their own work using these criteria. At the end of the semester I found that my students had produced much higher quality work in their assessment (corroborated by a moderator), and I received a higher evaluation score of 4.8. Each year I consulted my student evaluations and refined my explanation of what I wanted my students to do, until I left Melbourne in 2010 with an average evaluation score of 5 out of 5.

    We are encouraged to evaluate our courses, subjects and teaching so we can confirm our jobs, get promoted, and assure the quality of teaching and learning in the university, yet we have little guidance about how we can use the evaluation results to improve our teaching. Here is how I do it, using student survey data.

    I start by thinking about how I will approach the evaluation data I already have. My attitude towards this data, particularly student surveys, allows me to use it to improve my teaching.

    1. View the data as formative information

    I see student survey data as a tool to help me improve, rather than as a mark or grade for me, my teaching or my course. If I do not make this shift in perception, if I view the data solely as a final judgment (a pass or fail on my teaching), then I cannot use it for development or improvement. I renounce "What's wrong with my teaching?" and embrace "How can I develop my teaching?" It is also important that I treat the data as something that can be improved upon, no matter how good or bad my evaluation results are.

    If I wanted to judge the overall quality of my teaching, perhaps for promotion purposes, then I would acknowledge mitigating factors. For example, if I was teaching a large, compulsory course, I might argue that my student survey results do not directly reflect the quality of my teaching or my course. But these mitigating factors are a distraction when I want to develop my teaching, and they can easily become an excuse for why I cannot improve. So when my aim is to develop my teaching, I simply ask myself: Given this data, what can I do to improve the learning for my students?

    2. Treat the data as an approximation of student learning

    I treat the student survey data as a window on student learning, an indication of the extent to which students benefitted from the course I taught. So, if students said that they found a lecture boring and irrelevant, I would not take this to be a judgment about myself, my teaching, or even my students. But I would take it as an indication that there is a barrier to their learning, and I ask: How can I change my teaching and my course so that they no longer find it boring?

    3. Approach the evaluation data as an inquiry

    My evaluation results provide me with a source for hypotheses about what might be improvable, and an approximate means to test these hypotheses.

    09

  • Clinton deliberately made time for students to clarify and discuss the rubrics intended for assessment purposes. He even illustrated how the assessment criteria were applied when he gave us feedback on our mini assignments, and arranged for peer review of the assignment drafts. This was most helpful when we eventually embarked on our final, large assignment because we were by then very confident and clear of what he would be looking for in the assessment. We were thus encouraged and motivated to work towards excellence in our major assignment! We wanted to do well, not just for ourselves but also for him! - Student comment 2009

    I hypothesise why I received my particular student survey results, and therefore what might be improved. I use triangulated data from various sources (formal student evaluations, my own classroom observations, assessment results, informal feedback from students, peer observations) as well as my understanding of teaching and learning. From my hypothesis, I infer various changes in teaching that are likely to improve my evaluation scores, which I then implement. Then I gather more evaluation data to see if my teaching has improved. I generally use the student evaluations of the overall quality of learning as a proxy measure: more specific evaluation data can help to identify the areas that could be developed, but I tend to judge improvement by the overall student survey score. If that score increases after my change in teaching, and nothing else is lowered, then this suggests that my teaching has improved. If I continue to receive higher evaluation results year after year, then this gives me reason to think my hypothesis has been confirmed and my teaching or my course has improved (though I have to be careful to distinguish between merely making students happier and genuinely making the course better and improving their learning).
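    For those who keep their survey results in a spreadsheet or script, here is a toy sketch of the decision rule described above (the overall score as the proxy measure, counted as an improvement only if no other item drops). The question labels and scores below are invented for illustration; they are not Clinton's actual data.

```python
# Toy illustration of the "overall score as proxy" heuristic described above.
# All question names and scores are invented for illustration only.

def suggests_improvement(before: dict[str, float], after: dict[str, float],
                         overall: str = "overall quality of learning") -> bool:
    """True if the overall score rose and no other item fell."""
    overall_up = after[overall] > before[overall]
    nothing_lowered = all(after[q] >= before[q] for q in before)
    return overall_up and nothing_lowered

year_before = {"overall quality of learning": 4.6, "assessment requirements were clear": 3.9}
year_after = {"overall quality of learning": 4.8, "assessment requirements were clear": 4.4}

print(suggests_improvement(year_before, year_after))  # True for these invented figures
```

    As the article cautions, a rising number on its own may simply mean happier students, so a positive flag is only a prompt to check assessment results and other sources before concluding the course is genuinely better.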

    4. Read between the lines

    When I use student survey evaluation data to improve my teaching I am conscious that what students say is the issue may not be the issue. When they say a session is boring, then something is a problem for them, but that problem may not be a boring lecture. Perhaps the real problem is that when I gave my students feedback about an assessment task, I had not explained how this would enable them to understand the course and do better in the final exam, so they thought I was wasting their time, which they expressed as "That was boring!" If so, this suggests that I might improve my course by clarifying the importance of the feedback.

    5. Take a broad view of teaching

    I also approach the evaluative data with an expanded view of teaching. Teaching is everything I do to support student learning, including writing course documents and assessment tasks, selecting readings and textbooks, offering online support, office hours and email contacts, scheduling lectures and arranging the desks in tutorials. This broad view of teaching better enables me to hypothesise about what I might do to improve my teaching and my courses.


  • Student Evaluations: Closing the Loop

    Lynley Deaker, Sarah Stein, Jo Kennedy

    "Students have very valid opinions and need to know that what they think matters and is taken seriously; just as I seek out input for course content from them so that they find the sessions relevant, useful and meaningful, they should also see teaching and course evaluations as part of this natural process." - A University of Otago participant

    Centralised systems of student evaluation have become normative practice in higher education institutions. Research has indicated that how teachers perceive student evaluations within their context, and the role they personally play within the evaluation process, determines the nature and degree to which they engage in evaluation.

    A recent research project led by Dr Sarah Stein (HEDC) investigated the perceptions tertiary teachers in the New Zealand education context hold about student evaluations, the factors affecting their views and how they engage with evaluations. The project was funded by an Ako Aotearoa National Project Fund (2009) grant. The research drew on a combination of quantitative and qualitative data: 1,065 staff from three institutions (Otago and Waikato Universities and Otago Polytechnic) participated in the questionnaire, and 60 volunteers were interviewed. Almost 75% of respondents from across the three institutions claimed they regarded gathering student evaluation data as personally worthwhile, with the meanings attributed to "worth" highlighting both contributing and limiting factors.

    Factors contributing to teachers' sense of the worth of student evaluation data included:

    - Informs course/teacher development
    - Helps identify student learning needs/experiences
    - Informs and provides evidence for use in quality/summative/performance-based processes
    - Forms part of a range of evaluation practices

    Factors limiting teachers' sense of the worth of student evaluation data were:

    - Use for quality/summative/performance-based processes, and doubts about its dual purpose
    - Teachers judged on factors outside their control
    - The nature of evaluation instruments
    - Other evaluation methods better/preferred
    - Current student evaluation system limitations
    - Timeliness of receiving the results
    - Difficulties in interpretation and how to use the data effectively
    - Questionable quality of student responses

    Other limiting factors included preoccupation with research, whether the institution valued teaching, and concern for privacy around evaluation feedback.

    The project highlighted the need to see the process of evaluation as one that is complementary to monitoring, demonstrating and assuring quality. This implies a need to educate about the rationale for using student evaluations and to focus on engaging meaningfully with evaluation processes.

    The recommendations highlight the need for a "closing the evaluation loop" framework. Such a framework would be underpinned by a view of evaluation that addresses both developmental and auditing needs. It would focus on the role and responsibilities of groups and individuals to provide, interpret and act upon evidence gathered through student evaluation; to plan for on-going personal, professional and course development; and to involve the whole of the institution in a collaborative way to enhance the quality of the learning and teaching environment. This will ensure that there is a match between the conceptualisations of evaluation and how they are expressed in institutional policy, or enacted by institutions and individual teachers, which will underpin a move to developing teaching and evaluation as a collaborative and organic endeavour that is complementary to a commitment to good practice.

    A system that maps plainly and precisely how accountability and development work alongside each other, including how student evaluation plays a part, will assist in reducing the tensions noted by some teaching staff in the study, and reinforce the value for those who already actively engage with, and welcome, the feedback.

    The full report can be viewed at: http://tiny.cc/c9icmw
    The Summary Guide can be viewed at: http://tiny.cc/69icmw

  • Kerry Shephard

    1000 words to change an institution?


    Evaluation of teaching has a history, and, as with all histories, it has had its critical points. The critical point for me occurred in the late 1980s, when my higher education institution back in the UK started to prepare for academic audit. I know that we asked our students what they thought of their courses before then, particularly through staff-student committees and course representatives, but it all became so much more formalised when we were obliged to gather together a paper trail "closing the loop" between design, delivery, student perceptions and redesign, and account for any discrepancies. And I stress that these were course evaluations, not individual teaching evaluations. Actually, I did not encounter a student perception form that mentioned my name until the 21st century, at which point it became personal. From then on, if students were not happy it was clearly my fault. Of course, Australia and New Zealand have sought the views of students about their teaching and teachers for many years, and I understand that these instruments are widely used in the USA. Why shouldn't we ask students what they think about the teaching that they've experienced? Surely teachers are at the centre of their learning experience, so need to be at the centre of our questions about their experience? But are teachers really at the centre of student learning experiences?

    By now you will have gathered that I'm not the strongest supporter of the individual teaching evaluation questionnaire. And I've used more than 250 words already. But you get the point. In my version of higher education, teachers are not at the centre of the student learning experience; learners are.

    For some, evaluation is a science, for others an art, and for many a discipline in its own right. There are many ways to do it, and the demands of accountability should not necessarily dictate how we do it, perhaps especially in academia, where we might anticipate a high density of clever people able to do it effectively. Perhaps Otago's system is particularly problematic for me. Most of our student feedback surveys are of the individual teacher variety, and as these are, at least initially, confidential to the teacher concerned, they have limited roles to play in departmental- and programme-based development. Add potential problems with small samples. Over the past decade, approximately 25% of individual teacher feedback surveys at our university have included responses from 10 or fewer students. Small groups provide low statistical reliability and the ever-present danger of students feeling other than anonymous. Some other higher education institutions, in Australasia and the USA, place restrictions on processing survey data with very small student groups and insist that other evaluative approaches, such as peer review, enter the analysis in such cases. Perhaps, because we do not have these restrictions, our own enthusiasm for peer review and other evaluative inputs is itself limited. Our commitment to including, in an unrestricted manner, student perceptions of their teachers within our academic promotion processes may be commendable, but is not without consequences to our collective perception of what counts as evidence of good teaching. I am suggesting that we may have allowed student perceptions of their teachers to over-dominate our evaluative processes.

    Is there another way? Does this escalator have a stop button? Of course, but it is up to us to use it! Senate has now recommended that all papers have course evaluations (Paper Feedback Surveys) at least on a three-year cycle. These routinely enter departmental discussions on what is working, and what is not, and will be (currently are, for some) valuable in helping us to understand students' learning experiences. Paper Feedback Surveys address student perceptions of their learning experiences rather than of what their teacher did. For many teachers a combination of student perception, peer review and critical self-review will provide rich descriptions of their teaching, suitable for departmental development processes and as evidence of good teaching. Such rich descriptions may be more challenging to interpret than the magic numbers (the percentage of 1 and 2 responses to the overall effectiveness question) ever were, but should provide those with this interpretative role with greater personal satisfaction for their efforts.

    But it is still too personal for me. If higher education really is serious about evaluating what it does in the world of teaching and learning, surely it needs to get to grips with bigger questions than whether or not Kerry Shephard, or any other individual, is individually pulling his or her weight. Surely the enterprise of higher education is more than the sum of its cogs? I'd like to think so, and that we could apply our extensive evaluative efforts towards some bigger-picture questions. The heart of the matter for me is that whereas assessment is used to determine the achievements of individual students, evaluation is good for populations or cohorts of students whose anonymity is assured, and where we either cannot, or do not wish to, apportion blame for lack of achievement. If our students are not collectively achieving what we hope they should, it is not my fault, your fault, or the fault of the student body; it is our problem to solve. It is also our responsibility to ask the questions, or to evaluate the extent to which we achieve what we say we shall. Each department or programme uses assessment to record individual student achievements with respect to agreed objectives related to their field of study. But institutionally we hope that our graduates will be good communicators, critical thinkers, have cultural understanding and a global perspective, be IT literate, lifelong learners, self-motivated team players, have a sense of personal responsibility in the workplace and community, and, as a bonus since 2011, have some degree of environmental literacy.

    But do our graduates achieve these things? How would we know? Assessment of individuals may not be appropriate, as descriptors such as willingness, appreciation and sense of personal responsibility will always be challenging to assess in an exam. If students' degrees counted on it, surely most could write an essay to demonstrate their sense of personal responsibility within the community! But such qualities are open to evaluation using a range of validated instruments. We accept that anonymity encourages our students to give impartial views about our teachers' characteristics, so it is not a great leap to use similar approaches to explore their characteristics (always triangulated, of course, with a broad range of indicators, such as, in this case, student portfolios). We could evaluate these things if we wanted to and use our existing evaluation infrastructure to help us. Indeed, if we spent a fraction of the time and effort that we spend asking students what they think of their teachers on addressing these important attributes, we might discover something about ourselves. Collectively, we might be doing a good job. But then again ...


  • Interpreting the results from a student feedback survey

    David Fletcher, Department of Mathematics and Statistics

    If you want to interpret the results from a student feedback survey, there are two statistical concepts that can be helpful to bear in mind: non-response bias and precision.

    Non-response bias

    Non-response bias can compromise results if not all the class respond. Those who do not respond might have different opinions from those who do, in which case the results may not be representative of the views of the whole class. For example, it is possible that a student who does not like the teaching may feel more (or less) motivated to provide feedback than a student who enjoys the teaching.
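    To make this concrete, here is a small hypothetical calculation (the class size, satisfaction level and response rates are invented for illustration, not drawn from any Otago survey): when satisfied students respond at a higher rate than dissatisfied ones, the observed percentage overstates the true one.

```python
# Hypothetical illustration of non-response bias; all numbers are invented.
class_size = 200
truly_satisfied = 0.70              # true proportion satisfied in the whole class
response_rate_satisfied = 0.60      # satisfied students respond more often...
response_rate_dissatisfied = 0.30   # ...than dissatisfied students

satisfied_responses = class_size * truly_satisfied * response_rate_satisfied
dissatisfied_responses = class_size * (1 - truly_satisfied) * response_rate_dissatisfied
observed = satisfied_responses / (satisfied_responses + dissatisfied_responses)

print(f"Observed satisfaction: {observed:.0%} (true value: {truly_satisfied:.0%})")
# Observed satisfaction: 82% (true value: 70%)
```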

    Precision

    Imagine that you have the results from a single survey with a 100% response rate, so that there is no possibility of non-response bias. There will still be some uncertainty associated with the results, as illustrated by the following example. Suppose 80% of the students respond to the question "Overall, how effective have you found X in teaching this course?" with the answer "Very effective". If you were to repeat the survey next year, you might find that 75% of the students responded in this way. And the year after that it might be 82%. This variation in the results for the same paper and teacher is to be expected, even if you teach it in exactly the same way each year. With survey results from several years, you can clearly assess your teaching in this paper better than with results from just a single year, both overall and in terms of any improvement over time.

    Results from only one survey can be put into context by predicting what the overall response would be over many hypothetical repetitions of the paper. To do this, we view the results from a single survey as a sample from a population of students that might take the paper, in the same way as an opinion poll involves a sample of the population in New Zealand. This leads to the concept of the precision of the results, couched in terms of what statisticians call a 95% confidence interval. In the example above, 80% of students responded with the answer "Very effective". If this comes from a survey with 500 respondents, a 95% confidence interval covers the range 76% to 83%, meaning that the percentage of students in this hypothetical population that would respond with this answer is very likely to be in this range. If the survey involved only 10 students, the 95% confidence interval covers a much wider range, from 44% to 93%. The precision of the first survey is much greater, with more certainty about the results. This is analogous to the margin of error often quoted when opinion polls are reported in the media.

    What does this mean in practice?

    You can clearly reduce non-response bias by encouraging more students to respond. Increasing precision is harder if you always teach small classes. It is possible to combine data from several years in order to reduce the range of the confidence interval. Even without this extra calculation, it is clear that if you have 80% of students responding with the answer "Very effective" in each of three years, the results are more precise than those from a single year. Whether you use the results for personal development, or others use the results for some form of judgment about your teaching, precision and the potential for non-response bias need to be borne in mind. In addition, I am sure that my colleagues in HEDC will recommend that you triangulate such results using other indicators of teaching quality!
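    To put rough numbers on the precision and pooling points above, here is a minimal sketch using the Wilson score interval, one common way of computing a 95% confidence interval for a proportion. The article does not say which method produced its figures, so treat the outputs as indicative rather than an exact reproduction (they come out close to the 76%-83% and 44%-93% ranges quoted for 500 and 10 respondents).

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion (Wilson score interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# 80% answered "Very effective" in each case.
cases = {
    "single survey, 500 respondents": (400, 500),
    "single survey, 10 respondents": (8, 10),
    "three years pooled, 10 respondents each": (24, 30),
}
for label, (successes, n) in cases.items():
    lo, hi = wilson_ci(successes, n)
    print(f"{label}: roughly {lo:.0%} to {hi:.0%}")
```

    The 500-respondent interval is narrow, the 10-respondent interval is very wide, and pooling three small years (24 of 30 responding "Very effective") visibly shrinks the range, which is the point made under "What does this mean in practice?" above.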


  • Evaluations: perspectives from end-users

    Angela McLean, HEDC

    Research reported in this issue of Akoranga looks at teachers' perceptions of evaluations. In response to this, I conducted an informal street-poll survey with six Otago University students, seeking their perspectives on the teacher and course evaluations they are asked to complete. Students ranged from first to fourth year of study, with equal gender representation. The questions, and the students' responses, feature below.

    So, what is the purpose of those evaluation questionnaires that you fill in?

    "To evaluate the lecturers, to see if we've enjoyed the course, if we've learned anything."

    "To give our feedback on how we think they teach."

    "So that lecturers can alter their courses, get feedback on how students think they are doing, so they can change their teaching style."

    "This is a chance for students to evaluate their teachers, gives students a chance to speak."

    "So they can figure out how effective they are as a teacher, and keep their job."

    "To see how everybody finds the lecturers, for the lecturers to see how they're doing, a personal evaluation I guess."

    How seriously do you take the process of filling in the questions?

    "I don't do it 100% seriously, most of the time I just tick 'all good'."

    "It depends how the lecturer is, whether I can understand them, whether I fall asleep or I think the lecturer is really bland. I don't write comments, partly because I can't be bothered, or I feel really bad if I write something bad."

    "I always do them seriously, because people in the future will benefit from the suggestions. Also, I know the lecturers in our department value them a lot."

    "Seriously: this is our chance to speak."

    "Reasonably seriously. I mean, they put time and effort to make them, so you may as well spend a couple of minutes filling them in."

    "I fill it in truthfully."

    What value do you see in filling in these questionnaires?

    "It is important, but a lot of students don't take it seriously."

    "It's important, but what happens to the results afterwards? You'd like to think that next time the course runs it would be better."

    "I use them to write comments, because sometimes it's easier doing that than approaching the lecturer."

    "It's quite important: it's communication between the lecturer and the students, it's good for the lecturer to see students' perceptions."

    "It makes the teachers more effective, creates a better learning environment, to improve this uni for everyone else."

    "I think it's important; how else would they know? It's the main way you give your opinions."

    What's your opinion on how many surveys you are asked to do each semester?

    "It's just right."

    "It's ok."

    "One per lecturer is ok, but we probably could do more about the whole course."

    "Just right, not too much."

    "It's ok."

    "I don't think it's too bad."

    How often do you hear back about the results?

    "Never."

    "I've never heard back."

    "Actually never."

    On the whole, the students in this small-scale street-poll seem to regard evaluations as a way of providing feedback to teachers, take the process relatively seriously, and think the number of surveys they are asked to complete per semester is appropriate. In addition, it seems these students view the evaluation process as important, not only for themselves but also for the benefit of other students. However, the resounding response of never hearing back about results suggests a potentially under-emphasised aspect of the evaluation process: that of closing the feedback loop. Taking a more substantial look at students' perspectives on evaluations may reveal further insights into how the process can be strengthened, for both students and staff.

  • No, we don't run popularity polls!

    Jo Kennedy, HEDC

    HEDC Evaluation Research and Development exists to support academic staff professional development. The team consists of Jo Kennedy, Allen Goodchild and Julie Samson, with Dr Sarah Stein as the academic coordinator of the section. We are mostly known for administering the University of Otago's centralised student evaluation system as a free service for staff. These questionnaires give students a voice to feed back their experience of papers/modules/courses and teaching, and they enable staff to utilise student perceptions when developing their papers and teaching.

    "Evaluation without development is punitive, but development without evaluation is guesswork" (Theall, 2010, p. 45)

    What do we do?

    In the course of each year we create, log, scan and report on 2,500 evaluations. Even though the proportion of online surveys has increased from 2% in 2009 to 9% this year, this still involves handling and counting over 100,000 forms.

    Due to system enhancements, our average turnaround time for generating reports after receiving completed questionnaires has reduced from 6.3 weeks in 2009 to the current 2.8 weeks. It takes us longer, however, to return reports over the last 5-6 weeks of lectures in each semester, when the three of us are dealing with over 600 evaluation requests. During these peak times, when our frazzle-levels are correspondingly high, patience and chocolate are appreciated! It's always worth asking if you need something done urgently; we say yes much more often than no. We have our regulars in this category (you know who you are).


    Student Evaluation Design

    We work with you to design a suitable evaluation. Your job is to tell us what type of evaluation you would like, which questions to include and the questionnaire medium (paper or online). Our job then is to set up the questionnaire, and, once it has been completed by students, process the results. In addition, we provide advice about questionnaire design, interpretation of results, other evaluation methods, and how the evaluations can be used within institutional processes (e.g. your Otago Teaching Profile, promotion and confirmation).

    There are three types of evaluation that we offer, each with a slightly different focus:

    Course Questionnaire
    - Primary purpose: To assist development and planning of a course/paper/module.
    - Respondents: Students.
    - Structure: No compulsory questions; choose from a catalogue or customised questions. Options of 5-point Likert-scale or comment questions.
    - Can be used in performance appraisal processes? Yes.

    Individual Teacher Questionnaire
    - Primary purpose: To assist individual teacher professional development in relation to student learning.
    - Respondents: Students.
    - Structure: 5 compulsory questions and 5 additional questions from a catalogue. One fixed comment question.
    - Can be used in performance appraisal processes? Yes.

    Coordinator/Team Leader Questionnaire
    - Primary purpose: To assist individual teacher professional development in relation to coordination/team leadership activities.
    - Respondents: Teaching team/tutors/demonstrators.
    - Structure: Choice of 5 to 10 questions from a catalogue. Options of 5-point Likert-scale or comment questions.
    - Can be used in performance appraisal processes? Yes.

    We encourage staff to not only be self-reflective and gather evaluation data that helps them improve their teaching/courses, but also to realise the importance of having that data in a form that can be used as evidence for all the types of quality advancement processes that exist at Otago University and externally. Examples of processes that this evaluation data can be useful for include performance appraisal (promotion, confirmation), job applications, departmental review and external accreditation. Improvement and development is the first step, and this then allows judgments to be made about quality.

    Research, Projects and Policy

    Good evaluation practice starts at home, so we support a range of HEDC evaluation activities.

    We conduct evaluation-related research. Just completed is an Ako Aotearoa-funded project on Teachers' Perceptions of Student Evaluation. The summary and full reports are available at: http://akoaotearoa.ac.nz/student-evaluations.

  • 20

    Over the last year we have been working on a major system improvement project, stage one being the Otago InForm online ordering system. This is an on-going project, and we welcome staff feedback about its usability and features. The initial development phase of this project utilised the feedback we received from our Evaluation Service survey conducted in 2010.

    We are involved with policy development and are part of an Evaluation Working Group currently considering possible changes to the student evaluation process.

    Myth-Busting!

    As the title suggests, a number of myths still persist about student evaluations. This is despite substantial research on validity and reliability of student ratings. If any of the phrases below sound familiar, you might like to read a paper by Benton and Cashin (2012) in which they summarise the research and literature that shows these are myths.

    - Students cannot make consistent judgments.
    - Student ratings are just popularity contests.
    - Student ratings are unreliable and invalid.
    - The time of day the course is offered affects ratings.
    - Students will not appreciate good teaching until they are out of college a few years.
    - Students just want easy courses.
    - Student feedback cannot be used to help improve instruction.
    - Emphasis on student ratings has led to grade inflation.

    (Benton & Cashin, 2012, p. 2)

    Despite the lingering myths, many staff value collecting student data on their courses and teaching. As one University of Otago academic staff member commented in the Ako Aotearoa research project mentioned above, "evaluation keeps you on your toes - review by others is a great way to identify how others see you... and how they see your strengths and weaknesses", while another comment provides a constructive thought on which to conclude: "Student opinion is essential to understanding what the students feel and what they like and don't like or understand - they also offer ways to improve."

    References:

    Benton, S. L., & Cashin, W. E. (2012). Student ratings of teaching: A summary of research and literature (IDEA Paper No. 50). Manhattan, KS: Kansas State University, Center for Faculty Evaluation and Development.

    Theall, M. (2010). New resistance to student evaluations of teaching. The Journal of Faculty Development, 24(3), 44-46.

  • QAU learning and teaching

    Romain Mirosa, Quality Advancement Unit

    Since 1995, the University of Otago has conducted an annual Graduate Opinion Survey (GOS) and Student Opinion Survey (SOS) to obtain feedback on students' overall academic experience. These surveys are managed by the Quality Advancement Unit (QAU), which also coordinates the University's response to the external academic audit conducted by the New Zealand Universities Academic Audit Unit, administers the University's internal academic and administrative reviews process, and supports initiatives that facilitate good practice in quality assurance and improvement across the University.

    The GOS surveys graduates who completed their course of study two years previously, while the SOS surveys current students. Each degree/major combination is surveyed approximately once every four years and the GOS and SOS timetables are aligned.

    The core instrument of both surveys is the Course Experience Questionnaire (CEQ), which is widely used in Australia. The CEQ is directed at students and graduates who have/had a coursework component as part of their study, and asks questions grouped into a number of themed scales in order to measure student assessment of the following:

    - Good Teaching
    - Clear Goals and Standards
    - Learning Community (SOS only)
    - Intellectual Motivation (GOS only)
    - Appropriate Assessment
    - Generic Competencies
    - Overall Satisfaction

    The statements in the CEQ are based on comments that students and graduates often make about their experiences of university teaching and study, and that research has shown to be indicative of better learning. The emphasis of this questionnaire is on students' perceptions of their entire course of study. The results are the averages of students' experiences.

    In 2011 and 2012, the University added a new postgraduate section to its graduate survey and student survey respectively, based on the Postgraduate Taught Experience Survey and the Postgraduate Research Experience Survey (developed by the UK-based Higher Education Academy). The aim of this modification is to enable benchmarking with comparable UK institutions, and to provide more in-depth information about the postgraduate experience of Otago students.


    In 2013, the graduate attributes section in the GOS will be modified to align with the new Otago Graduate Profile adopted by the University in April 2011.

    Like HEDC's evaluation questionnaires, the SOS and GOS provide data to support improvement in teaching and learning outcomes. The results from the surveys provide a unique possibility to gain an overview of teaching and learning outcomes (through the particular lens of students' and graduates' perceptions) at institutional, divisional and departmental level. They are also used as a source of information for review panels, to demonstrate to government, through the Annual Report, that the University has achieved the goals it has set itself, and to external agencies, such as the 2011 Academic Audit Panel, that the University has processes in place to seek feedback and use data to facilitate improvement in its core activities and operations.

    While the GOS and SOS surveys continue to be core, the University has taken part in three iterations (in 2009, 2010 and 2012) of a relatively new student experience questionnaire called the Australasian Survey of Student Engagement (AUSSE). This questionnaire focuses on measuring the level of engagement of students and their learning outcomes. While response rates have been very low, the AUSSE surveys have yielded some interesting results, and, as it is a standardised questionnaire used by many institutions in Australasia, it provides the opportunity for benchmarking.

    Recently the Australian Government has commissioned a new survey focusing on the student experience, called the University Experience Survey (UES), which will be rolled out in Australia in 2013. The UES appears to be heavily influenced by the engagement literature, while still incorporating more traditional student experience questions. The QAU will monitor these developments and evaluate their possible relevance and value for the University of Otago.


  • Beyond centralised student evaluation

    Swee-Kin Loke


    Whether you had planned for formal student evaluation or not, your students might already be evaluating your course or teaching via other means (sometimes publicly!). Akoranga scanned a popular teacher rating website and also student votes for the OUSA Teaching Awards for the most complimentary student views on Otago teachers.

    "The way [he] taught the subject, I continually wanted to go and make the world a better place. [He] will be a lecturer I will remember my whole life."

    "Remembers everyone's names and is very funny, fair, engaging, down to earth, and every lecture is useful and worth your time."

    "She makes me want to come to class (which is a change)! Her enthusiasm is infectious."

    "This guy is just fantastic: he is smart, funny, very complex but in a good way."

    "[He] is clear as to what he expects of you, not easy but fair."

    "So helpful and humorous, I never miss his lectures. LEGEND."

    "She is incredibly knowledgeable and witty."

    "I actually listen the whole time!"

    "The way in which she answers our questions is by challenging our own thoughts so we learn how to critically analyse experiments in the future. Has a lot of patience with those that are struggling and makes all areas of the paper interesting."

    "A scholar and a gentleman, he has imbued me with a deep respect for the Classics."

    "She always made everything clear and ensured we all understood."

    "She's the most helpful and cheery professor I've ever had."

    "All-round good bloke, and interactive with the class."

    "Great teacher, great person."

  • Akoranga Evaluation

    To:
    Jo Kennedy
    Higher Education Development Centre
    65/75 Union Place West
    Dunedin 9016

    You have in your hands the eighth issue of Akoranga, a collaborative periodical from HEDC and colleagues around the University with an interest in higher education teaching. To remain relevant to readers, the editorial team felt it was timely to seek your views more formally. We promise that this won't take more than a walk to the internal mail postbox.

    Alternatively, you can fill it in online here:

    http://tiny.cc/kampmw (The survey will be live until Christmas)

    Do you teach at the University of Otago? YES NO

    Q1.

    Approximately what proportion of this eighth issue did you actually read?

    0% 25% 50% 75% 100%

    Q2.

    We have listed below the 3 main goals of Akoranga. How effective is Akoranga in:

    (a) Promoting good teaching within the university

    Very much Not at all

    (b) Generating debate around issues in higher education teaching

    Very much Not at all

    (c) Communicating what HEDC values in terms of teaching and learning

    Very much Not at all

    Q3.

    How can we improve Akoranga?