36th Annual International Conference

Improving University Teaching The Collaborative Classroom

Conference Proceedings 2011

Bielefeld, Germany July 19th -22nd

Conference Director: James Wilkinson, Harvard University, USA
Conference Hosts: Andrea Frank, Bielefeld University, Germany; Janina Lenger, Bielefeld University, Germany
IUT Advisory Board: Dirk Bissbort, University of Oulu, Finland; Anna Kwan, The Open University of Hong Kong, Hong Kong; Ray Land, University of Strathclyde, United Kingdom; Kethamonie Naidoo, University of Limpopo, South Africa; Robert Pithers, University of Technology-Sydney, Australia; Peter Seldin, Pace University, USA; Todd Zakrajsek, University of North Carolina, Chapel Hill, USA; John Zubizarreta, Columbia College, USA
Advisor Emeritus: Wilbert J. McKeachie, University of Michigan, USA
Program Coordinator: Deb Van Etten, International Teaching Learning Cooperative, USA

Opening Plenary

International Perspectives on Building Collaborative Frameworks

Ray Land (UK), Kethamonie Naidoo (South Africa), Robert Pithers (Australia), Anna Kwan (Hong Kong) and Todd Zakrajsek (USA)

A panel of experts from institutions around the globe will discuss how collaborations that promote student learning are facilitated in their classrooms. The conversation will provide both a framework and concrete methods that can be incorporated into your classrooms. Time will be allocated to respond to challenges faced by delegates at the conference.

Invited Plenary: Host Institution

Katharina Kohse-Höinghaus, Bielefeld University, Germany

Teaching Science

Addressing global challenges in areas such as climate, energy and health needs young people with a vision and rock-solid science knowledge. Science is, however, not everyone's favorite subject, and it is often regarded as difficult. How can we prepare students to choose science disciplines for their studies? Can we keep them enticed to learn science during their university education? How can we enable them to find their way into their future profession as scientists? The talk will offer some thoughts and examples. Professor Dr Katharina Kohse-Höinghaus is Senator of the Helmholtz Association for the Research Field "Key Technologies" and has been university professor for physical chemistry at

Bielefeld University since 1994. She studied chemistry at the Ruhr-Universität in Bochum, where she attained her doctorate in 1978. After terms at the German Aerospace Center (DLR) in the Helmholtz Association and at the Department of Mechanical Engineering at Stanford University, as well as with the Molecular Physics Laboratory at SRI International, USA, she obtained her habilitation in 1992. In the following year she was granted a Heisenberg Fellowship and the annual "Baetjer Lectures" of the School of Mechanical Engineering and Applied Science of Princeton University, also completing research stays at ONERA in Paris. From 2001 to 2003 she was vice-rector for research and young scientists at Bielefeld University. In 2007 she received the Cross of the Order of Merit of the Federal Republic of Germany from Federal President Horst Köhler, honouring her commitment to the "teutolab" project, and in 2008 she was awarded an Honorary Guest Professorship of the University of Science and Technology of China. Professor Kohse-Höinghaus holds positions in various science organisations; for example, she is President-Elect of the International Combustion Institute and a member of the Senate and of the Joint Committee of the German Research Foundation (DFG). She is also a member of the University Council of Bielefeld University and of the German Academy of Sciences Leopoldina – National Academy of Sciences (since May 2008).

Closing Plenary

John Zubizarreta, Columbia College, USA

International Perspectives on Teaching Excellence: Reflections of a Carnegie/CASE U.S. Professor of the Year

One of our perennial questions as educators around the world is the thorny issue of what defines creative and inspiring excellence in teaching and learning. We invest much of our work in the notion that innovative, dynamic pedagogies help to facilitate rich, transformative learning. Certainly, the U.S. Professor of the Year program, sponsored by the Carnegie Foundation and the Council for the Advancement and Support of Education, celebrates the kind of excellent teaching that results in deep and lasting learning. Other nations, too, honor their best teachers with various awards, all of them predicated on recognition of excellence. What can we agree on as the basic principles of excellence in our work as teachers and learners? This plenary workshop presentation by a recently named Carnegie/CASE award recipient is a hands-on, interactive opportunity to explore the theoretical and practical benefits of the kind of teaching that we applaud for producing significant learning.

John Zubizarreta is Professor of English, Director of Honors and Faculty Development, and former Dean of Undergraduate Studies at Columbia College. He has published widely on modern American, British, and comparative literatures; teaching pedagogy; honors education; teaching,

learning, and administrative portfolios; academic leadership; and faculty development. A Carnegie Foundation/C.A.S.E. Professor for South Carolina, he has also earned recognition for teaching and scholarly excellence from the American Association for Higher Education, the South Atlantic Association of Departments of English, the National United Methodist Board of Higher Education, the South Carolina Commission on Higher Education, and other educational organizations. John has led faculty development workshops and delivered keynote addresses worldwide, and he has mentored faculty nationwide and abroad in enhancing and documenting teaching and learning. His most recent interests have turned to student learning portfolios designed to improve learning through reflection, collaboration, and evidence.

36th Annual International Conference

Improving University Teaching The Collaborative Classroom

Pedagogical Research

Bielefeld, Germany July 19th-22nd

Works of Pedagogical Research

Cochrane, M. & Woolhouse, C., "We Thought That's What You Wanted!" Using Student Focus Groups to Collaborate in Course Development. (Paper Panel July 22, 2011).
Bissbort, D., Järvelä, S., & Nenniger, P., Advancing Conditions for Self and Shared Regulation in Collaborative Learning Settings in Higher Education. (Paper Panel July 20, 2011).
Farley-Lucas, B. & Sargent, M., Enhancing Faculty-Student Collaborations through Out-of-Class Communication. (Paper Panel July 20, 2011).
Wafula, J., Embracing Collaborative Testing for Formative Assessment at Universities. (Paper Panel July 20, 2011).
Pithers, R. & Holland, T., Measuring and Building Classroom Communities. (Paper Panel July 22, 2011).
Remmer-Nossek, B., Engaging University Teachers in a Reflection of the Study Programme. (Paper Panel July 22, 2011).
Gerhardt, B. & Borst, A., The Stuttgart Model: Phase-Overlapping Modules. (Presented July 22, 2011).
Bar-Yishay, H., International Collaborative Teaching Using Interactive Distance Learning: A Case Study. (Paper Panel July 20, 2011).

Title: "WE THOUGHT THAT'S WHAT YOU WANTED!": USING STUDENT FOCUS GROUPS TO COLLABORATE IN COURSE DEVELOPMENT
Theme: Collaboration and Active Learning

* Matt Cochrane, Faculty of Education, Edge Hill University, St Helens Road, Ormskirk, L39 4QP, United Kingdom
Dr. Clare Woolhouse, Faculty of Education, Edge Hill University, St Helens Road, Ormskirk, L39 4QP, United Kingdom

* Author who will be the primary contact

ABSTRACT

Course evaluation frequently takes the form of a questionnaire completed towards the end of a module. High student satisfaction ratings can sometimes disguise difficulties experienced by the students during the course. This paper describes how focus group interviews were held with some students while their course was still in progress. This allowed the researchers to gain a clearer insight into student expectations and how these can be driven by assumptions the students make about the course from the pre-course information.

Summary

A comparative study of the conclusions drawn from questionnaire data and from focus group interviews. Analysis of questionnaire data can appear to be over-simplistic and can mask important detail.

Introduction

Some students wishing to become science or mathematics teachers in English secondary schools (for ages 11-18) lack sufficiently strong subject knowledge to qualify for the training programmes. Many English universities provide a range of Subject Knowledge Enhancement (SKE) courses to bridge the gap between the knowledge they gained in their degree and the knowledge required for teaching. The authors are carrying out a longitudinal research project

seeking to investigate the impact of such a course on the students' careers. Does attendance on this course provide them first of all with the appropriate knowledge-based skills to train as a teacher, and secondly with sufficient confidence to enter the teaching profession? And do prospective employers, colleagues and pupils view teachers who have entered the profession by this route differently? In other words, are the students welcomed into the community of science and mathematics teachers, and/or do they feel a part of that community? The research team decided to use focus group interviews to discuss these issues with the students. Questionnaire responses gathered as a routine evaluation of the course suggested

that satisfaction with the course was high, and that all students felt that subject knowledge had progressed significantly. However, analysis of the interviews suggested a degree of mismatch between student expectation and course design. Because of the interviews, the course team were able to make adjustments to the course design which would otherwise not have taken place. This paper discusses the use of focus groups as a methodology for student collaboration with course teams in the evaluation and design of teaching programmes.

Literature review

In England there is a significant shortage of secondary teachers (covering ages 11-18) in certain subjects, particularly mathematics, physics and chemistry. The UK Government funds a

Subject Knowledge Enhancement (SKE) course in these subjects to enable graduates with degrees in other (but related) subjects to bring their subject knowledge up to the standard required for an Initial Teacher Training (ITT) course. This kind of support is essential (Lucas and Robinson 2003) to enable entry into the teaching profession for individuals who would otherwise not be able to qualify. The authors are carrying out a research project to determine the extent to which the SKE course prepares the trainees for their engagement with ITT, and for their subsequent entry into the teaching profession. The project involves researching the views of the participants themselves through questionnaires and interviews, and also the views of pupils and teachers at the schools attended by these trainees, both during their professional practice while on their ITT course and finally when they have entered the profession. We wanted to know if the participants felt that the SKE course helped them to develop their subject knowledge so that they felt adequately prepared when compared to other trainees on their ITT course, adequately prepared to face pupils in the classroom, and adequately prepared to enter the competitive jobs market. Our first conclusions from discussions with the trainees (Woolhouse and Cochrane, 2009) were centred on the trainees' ability to join the community of practice (see, for example, Lave and Wenger, 1991: 98) of mathematics, physics and chemistry teachers that their postgraduate programme would prepare them for. Further (forthcoming) papers will report on the success of this cohort in entering the profession, and on the quality of their teaching. While analysing the interviews of the trainees, it became evident that while much of the conversation covered the participants' improving confidence as mathematicians (Burton, 2004), some of the information from the trainees was useful in evaluating the quality of the course (see, for example, Rowls and Swick 2000), and provided far richer information than the more usual

methodology we had employed for this purpose, namely a questionnaire completed towards the end of the course. The questionnaires and interviews conducted for this research took place both during the SKE course and in the year after the course, once the trainees had had some time to put their new level of subject knowledge to use. Implicit in the title of this paper is the conclusion that the trainees in the interviews gave far more revealing information face to face and in groups than they had through the anonymous questionnaires. Where the questionnaires tended to ask questions such as "Are you satisfied with the course?", "Has your subject knowledge developed as much as you hoped?" and "Has the course met your expectations?", the responses indicated a high degree of satisfaction. Using a five-point Likert scale, it is possible to gain a measure of student satisfaction through the average response to each question.

Methodology

In this research we both conducted interviews and analysed questionnaires, and for this paper we consider a comparison of responses to similar questions drawn from the two media. It emerges that over some issues where student satisfaction appeared to be high, there were nevertheless some points the students were keen to raise when faced with the same questions in person and in a group. This paper discusses the outcomes of six of the questions from the questionnaire delivered to 20 mathematics trainees at the end of the SKE course. A focus group interview was undertaken with five of these trainees while they were in the university taking their ITT course. Focus groups have become prevalent in social science research only in the last 20 years or so, after being developed for use chiefly in market research (Kitzinger 1995). The significant feature of focus group research is the interaction between participants: a focus group session enables participants to pick ideas off one another and to develop their opinions.

Onwuegbuzie et al. (2009) also stress the need to consider interactivity in focus group analysis and are critical of much research which purports to conduct focus group interviews but often presents data as a series of individual statements. In this paper, therefore, we present some findings from analysis of the focus groups which demonstrate conclusions which can be drawn after some interactivity between the participants. An additional quantitative analysis of the responses in the interview was carried out in an attempt to measure the degree of interactivity taking place: a case of "presenting regularities in numerical form" (Onwuegbuzie et al. 2009: 14).

Findings

The results of the six questions extracted from the questionnaire are presented in the appendix. It can be seen that satisfaction is generally very high: at least 75% feel that the course is good or better, and virtually all felt that their knowledge had improved. (One of the

respondents scored their knowledge as very good before the course and adequate afterwards. We were able to determine that this was a result of the respondent's improved understanding of the subject knowledge requirements. This raises possible questions about the validity of

comparisons between these two questions.) We then conducted a very simple quantitative conversation analysis on the overall conversation. Throughout the recorded session, the

interviewer offers questions and prompts to develop the contributions of the participants. In fact there are a total of 94 prompts and 211 responses throughout the session. For the first half of the session, the participants are expressing largely positive statements about their progress through the course, and their expectations of the teaching profession. In this period there are 70 prompts and 108 responses (only 1.5 responses for each prompt) indicating the strong individual nature of the responses, and the fact that the interviewer found it necessary to intervene often. At this point, the interviewer invites the group to offer criticisms of the course:

"Ok and then the final question is how do you think this course could be improved? Hit us with it." In the remainder of the interview, there are only 24 further prompts and 103 responses (4.3 responses per prompt). The participants are clearly more animated about the prospects of improving aspects of the course and, significantly, are keen to share their opinions with each other.
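As a minimal illustration of this simple interactivity measure, the sketch below (Python; the segment labels are ours, and the counts are those reported above) reproduces the responses-per-prompt ratios for the two halves of the session:

```python
# Minimal sketch of the responses-per-prompt ratio used above as a rough
# measure of interactivity. Counts are those reported in the text
# (94 prompts and 211 responses in total); segment labels are illustrative.
segments = {
    "before the invitation to criticise": {"prompts": 70, "responses": 108},
    "after the invitation to criticise": {"prompts": 24, "responses": 103},
}

for label, counts in segments.items():
    ratio = counts["responses"] / counts["prompts"]
    print(f"{label}: {ratio:.1f} responses per prompt")

# Printed values, rounded to one decimal place, match the paper:
# before the invitation to criticise: 1.5 responses per prompt
# after the invitation to criticise: 4.3 responses per prompt
```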

In particular there are four lengthy interchanges of opinion providing information which either challenges the questionnaire data or adds to it:

Workload

The participants discuss the varying workload in the course, and express a desire to have more information about it, and more control over course content. They are particularly concerned with the presence of advanced topics in the course when they are expecting to teach at a more basic level. This challenges the notion that 75% feel that the course is meeting individual needs.

Pacing

In a similar vein, some felt that parts of the course went too quickly for them: "So like you could be on your stats [statistics] one which you can pretty much fly through the first half of it whereas the decision one [decision mathematics] took me a lot longer so it is hard trying to figure out where you should be up to and how much time you should be leaving for everything." "Yes pacing it is very difficult because it goes up the difficulty almost exponentially as you are going through the things so if you say I will do two chapters for instance a week, you do the first two chapters then you are fine and you are suddenly running out of time."

Again, are they saying that their individual needs are not being met?

Support

Participants are happy with support, but negotiate their way towards preferring a regular face-to-face tutorial session, while acknowledging that this cannot work for everyone (some of the trainees live a considerable distance from the university): "…and I needed that yes, yes." "I agree and I love the thought that you could sort of come over if you were having difficulty because you were just sitting and thought my head is going to explode with the knowledge, you could come in and ask people." "…but no generally I would say once a week would be good, again different people, different circumstances. We were given the option if you had of wanted or needed to be here every week." "Yes, tutorial week." "It just favors those who live fairly locally."

Assessment

The assessment pattern for the course is unusual, and as with an earlier discussion

(Cochrane and Woolhouse 2009), uses personal reflection to build understanding of the individual's own needs so that they can better address them personally. The group initially discusses difficulty with the concept of starting the course without having had a formal assessment to identify needs, but finally the group concludes that: "I think one of the things the course maybe has given me is the confidence that if I am going to be asked to teach a subject I am not too well aware of or not up on

that I know now I can go back into learning myself and pull that knowledge [together]. Again this is one of the criteria that we are trying to achieve – self study, because when you are a teacher nobody is going to sit you at a desk and tell you how to do things, it is just over to you and you are going to have to learn it, if you don't know a topic you are going to have to learn it."

There was a slight inconsistency in the responses to questions 5 and 6: the trainees were asked to judge their subject knowledge both before and after the course without any clear guidance on how to judge this. One of the aims of the course is that trainees should develop their ability to assess accurately their own subject knowledge needs. The responses above suggest that by the end of the course this has been achieved, so their opinion of their pre-course knowledge may have been overestimated.

Conclusions

The research has illustrated two themes which are of interest: first, the analysis of focus group data is more complex than simply extracting information from the participants, and a study of the interactivity of the data should be conducted. Interactivity is an integral part of focus group analysis (Kitzinger 1995, Onwuegbuzie et al. 2009), and so the parts of the discussion used for this analysis were those parts which demonstrated a degree of interaction between the participants and not just between the interviewer and an individual. Significantly, the critical evaluation of the course took place largely in those periods of the discussion.

The second theme is the discussion of the meaning of responses to evaluative questions: it is important to note that we are not seeking to contradict or criticise the data drawn from evaluative questionnaires, simply that by collaborating more interactively with students we can

gain a clearer understanding of the reasons behind some of the responses they give to questionnaires. The wording of questions can be critically misunderstood, and when the

questionnaire is analysed, false data result. The conversations with the trainees show that they were developing a clearer understanding of the purpose of the course and of their own development through engaging with the course. They have made some constructive and

thoughtful contributions which have since been incorporated into later versions of the course. In particular, the frequency of face-to-face sessions has increased, and the pre-course assessment has been changed so that students are able to start the course with a better understanding of their route through it, and therefore a better understanding of how to organise their workload.

References

Burton, L. (2004) 'Confidence is everything' – perspectives of teachers and students on learning mathematics. Journal of Mathematics Teacher Education, 7: 357-381.

Cochrane, M. and Woolhouse, C. (2009) Turning teachers into learners: evaluating the impact of a professional development program, Navigating innovations in teaching and learning, Improving University Teaching, Simon Fraser University, Vancouver, 14th-17th July

Kitzinger, J. (1995) The methodology of Focus Groups: the importance of interaction between research participants, Sociology of Health and Illness, 16 (1): 103-121.

Lave, J. & Wenger, E. (1991) Situated learning: Legitimate peripheral participation, Cambridge, Cambridge University Press.

Lucas, T. & Robinson, J. (2003) Reaching them early, identifying and supporting prospective teachers. Journal of Education for Teaching, 29 (2): 159-175.

Onwuegbuzie, A., Dickinson, W., Leech, N. and Zoran, A. (2009) A Qualitative Framework for Collecting and Analyzing Data in Focus Group Research, International Journal of Qualitative Methods, 8 (3): 1-21.

Rowls, M & Swick, K.J. (2000) The voices of pre-service teachers on the meaning and value of their service learning. Education, 120 (3): 463-475

Woolhouse, C., Cochrane, M. (2009) Is subject knowledge the be all and end all? Investigating professional development for science teachers, Improving Schools, 12(2): 166-176

Appendix: Questionnaire data

Please indicate the extent to which you agree/disagree with the following statements (1 = strongly disagree, 2 = disagree, 3 = not sure, 4 = agree, 5 = strongly agree; please indicate one).

1. The course addresses my individual training needs
   1: 10.0% (2)   2: 10.0% (2)   3: 5.0% (1)   4: 35.0% (7)   5: 40.0% (8)

2. I am making significant gains in subject knowledge during the course
   1: 5.0% (1)   2: 0.0% (0)   3: 5.0% (1)   4: 25.0% (5)   5: 65.0% (13)

3. The course is enabling me to identify my areas of strength and areas of development
   1: 5.0% (1)   2: 10.0% (2)   3: 0.0% (0)   4: 35.0% (7)   5: 50.0% (10)

4. I would recommend this type of course to other trainees
   1: 5.0% (1)   2: 0.0% (0)   3: 15.0% (3)   4: 15.0% (3)   5: 65.0% (13)

In relation to the topic/area of knowledge most important to you, assess your development by indicating one number (1 = no knowledge, 2 = very poor knowledge, 3 = poor knowledge, 4 = adequate knowledge, 5 = good knowledge).

5. What is your appraisal of your knowledge of the topic/area of knowledge prior to starting the SKE course?
   1: 5.6% (1)   2: 22.2% (4)   3: 38.9% (7)   4: 27.8% (5)   5: 5.6% (1)

6. What is your appraisal of your knowledge of the topic/area of knowledge now?
   1: 0.0% (0)   2: 0.0% (0)   3: 0.0% (0)   4: 27.8% (5)   5: 72.2% (13)
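As a worked illustration of the simple averaging described earlier in the paper (a measure of satisfaction through the average response on the five-point scale), the sketch below is a minimal Python example; the counts are transcribed from questions 5 and 6 above, while the function and variable names are ours.

```python
# Minimal sketch: summarising the five-point responses to questions 5 and 6
# (n = 18) as percentages and means. Counts are transcribed from the appendix;
# function and variable names are illustrative.
counts_q5 = {1: 1, 2: 4, 3: 7, 4: 5, 5: 1}   # self-rated knowledge before the SKE course
counts_q6 = {1: 0, 2: 0, 3: 0, 4: 5, 5: 13}  # self-rated knowledge after the course

def summarise(counts):
    n = sum(counts.values())
    percentages = {score: 100 * k / n for score, k in counts.items()}
    mean = sum(score * k for score, k in counts.items()) / n
    return n, percentages, mean

for label, counts in (("Q5 (before)", counts_q5), ("Q6 (after)", counts_q6)):
    n, pct, mean = summarise(counts)
    spread = "  ".join(f"{score}: {p:.1f}%" for score, p in pct.items())
    print(f"{label}  n={n}  {spread}  mean={mean:.2f}")

# Q5 gives a mean of about 3.06 and Q6 about 4.72, consistent with the
# reported shift in self-assessed subject knowledge over the course.
```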

Title: ADVANCING CONDITIONS FOR SELF AND SHARED REGULATION IN COLLABORATIVE LEARNING SETTINGS IN HIGHER EDUCATION
Theme: Collaboration and Active Learning or Enhancing Collaboration through Instructional Technology

Authors:
* Dirk Bissbort, Department of Educational Sciences and Teacher Education, Learning and Educational Technology Research Unit (LET), P.O. Box 2000, 90014 University of Oulu, Finland
Prof. Dr. Sanna Järvelä, Department of Educational Sciences and Teacher Education, Learning and Educational Technology Research Unit (LET), P.O. Box 2000, 90014 University of Oulu, Finland
Dr. Hanna Järvenoja, Department of Educational Sciences and Teacher Education, Learning and Educational Technology Research Unit (LET), P.O. Box 2000, 90014 University of Oulu, Finland
Prof. Dr. Peter Nenniger, Institute for Educational Science, University of Koblenz-Landau, Buergerstr. 23, 76829 Landau, Germany

* Author who will be the primary contact

Abstract

Self- and co-regulated learning is widely seen as a most effective combination of skills in collaborative and workplace learning contexts to face the challenges of today's learning society. Higher education aims at advancing autonomous academics who are able to use their knowledge and skills for taking on high levels of responsibility in their contexts of activity. Earlier research in different traditions of self-regulated learning (SRL) aims either a) at a deeper understanding of active motivation and emotion regulation in shared learning situations, considering complex interactions in changing learning contexts, or b) at developing more adequate explanation models of the influence of study conditions on individuals' approach to self-direction and its regulation in academic learning. However, improving university teaching requires knowledge from both research traditions. In consequence, this paper suggests an integration of multiple theoretical perspectives to focus on the reciprocal relations between individual and collaborative learning, including the respective forms of self-direction and regulation. Our own empirical results from two different studies (1) enable a deeper understanding of how motivation and emotion are regulated in collaborative learning using instructional technology, and (2) provide empirical evidence of how different study conditions in higher education impact the dynamic forces of self-direction and its regulation in learning. The discussion will reflect both the micro-level challenges of collaborative learning when regulating motivation and the macro-level issues of the modified study conditions due to the Bologna reform processes in Europe. Finally, as a consequence, a combination of both research methodology approaches is suggested that may lead to powerful tools for predicting the effects of study conditions on individual and collaborative learning.

Enhancing faculty-student collaborations through out-of-class communication

Improving University Teaching, July 2011, Bielefeld, Germany
Conference subthemes:
Bonnie S. Farley-Lucas
Margaret M. Sargent
Southern Connecticut State University


Enhancing faculty-student collaborations through out-of-class communication

Abstract

Using a faculty-student collaborative research approach, this study explores faculty-student out-of-class communication. Out-of-class communication is linked to student learning, engagement, and success and is the wellspring for mentoring, advising, supplemental instruction, and collaborative inquiry. Student-researchers conducted depth interviews with a diverse group of 33 undergraduates regarding behaviors, statements, and practices that contributed to out-of-class communication. Narrative analysis attends explicitly to students' interactions with faculty in and out of class and students' perspectives on instructional practices. Suggested strategies for effective out-of-class communication are presented, as well as key issues experienced while collaborating with undergraduate student-researchers. (93 words)

(Key words: out-of-class communication; faculty-student interaction; student perspectives)

2-Sentence Summary

A faculty-student collaborative research project, involving depth interviews with a diverse group of 33 undergraduates, explored the specific behaviors, statements, and practices that contributed to out-of-class communication, as well as the instructional practices that those students reported as facilitating faculty-student communication. Suggested strategies for effective out-of-class communication are presented, as well as key issues experienced while collaborating with undergraduate student-researchers.


Faculty-student collaborations are facilitated by out-of-class communication. Out-of-class communication includes mentoring, academic advising, supplemental instruction, faculty involvement in student organizations, and student-faculty discussions about non-class related issues (Nadler & Nadler, 2001). This paper summarizes a recent study on out-of-class communication that examined, in part, personal characteristics and behaviors students experienced as contributing to out-of-class communication and instructional practices students found to be helpful in supporting out-of-class communication (Farley-Lucas & Sargent, In Press). Process-related issues concerning the faculty-student research collaboration employed in the study will also be addressed.

Related Literature

Student-faculty communication is central to teaching and learning. Students rank student-faculty interaction as a high priority (Astin, 1993). They want to connect with professors and often cite the valued relational qualities of equality, mutuality, and respect (Garko, Kough, Pignata, Kimmel, & Eison, 1994). One of the two environmental factors most predictive of positive change in college students' academic development, personal development and satisfaction, and one of the five benchmarks of student engagement, is interaction between faculty and students (Astin, 1993; Kuh, Kinzie, Schuh, & Whitt, 2005). Expressing care, building rapport, and creating positive learning climates all contribute to positive faculty-student interaction, and thus to student motivation and learning (Ambrose, Bridges, DiPietro, Lovett & Norman, 2010; Chickering & Gamson, 1987; Dobransky & Frymier, 2004; Fusani, 1994; Meyers, 2009; Richmond, Gorham, & McCroskey, 1987). Since faculty-student interaction


promotes student motivation and success, professors are coached to increase contact, maximize office hours, and talk with students (McKeachie & Svinicki, 2006). Despite its many benefits, face-to-face out-of-class communication is infrequent, and electronic consultations via e-mail have largely replaced traditional office hours (Duran, Kelly, & Keaten, 2005). Regardless of context, students unfortunately do not always encounter positive faculty behavior. Teacher misbehaviors are defined as "those behaviors that interfere with instruction, and thus, learning" (Kearney, Plax, Hays, & Ivey, 1991, p. 310). In class, faculty misbehaviors negatively impact both students and faculty. Students report less learning, less engagement, and less enactment of recommended classroom behaviors when teachers misbehave (Dolin, 1995). Teacher misbehaviors are linked to student resistance (Kearney, Plax, & Burroughs, 1991), teachers' lack of credibility (Banfield, Richmond, & McCroskey, 2006) and negative teaching evaluations (Schrodt, 2003). Assertiveness, responsiveness, student liking for the teacher and affect toward the material are all negatively associated with teacher misbehavior (Banfield et al., 2006; McPherson, Kearney, & Plax, 2003, 2006; Myers, 2002; Wanzer & McCroskey, 1998). Not surprisingly, students can also encounter teacher misbehaviors out of class, including inaccessibility to students, missing scheduled appointments, not showing up for office hours, and/or not making time for students when they need additional help (Kearney et al., 1991). The consequences can be quite negative. Students can experience barriers to learning, public embarrassment, harassment, frustration, and the violation of expectations for faculty professionalism, all contributing to impoverished learning (Farley-Lucas & Sargent, 2007). Clearly, faculty wishing to engage students in out-of-class communication or collaborative projects must be mindful to avoid teacher misbehaviors both in the classroom and out of the classroom.

With an explicit focus on specific behaviors, interactions, and verbal statements that

students defined as encouraging out-of-class communication, we can make clearer connections to pedagogical practices that contribute to learning as well as to practices that contribute to collaborative student-faculty relationships. Specifying behaviors also allows for an exploration of the nature, development, and consequences of particular classroom dynamics. Two key issues will be addressed here.

RQ1: What personal characteristics and faculty behaviors have students experienced as encouraging out-of-class communication?

RQ2: What specific instructional strategies did students report as effective in encouraging them to engage in out-of-class communication with professors?

Method

To enhance student research experience and to allow for more candid interviews, three undergraduate interviewers collaborated with two professors on this project. Student-researchers were recruited and selected according to university-specific procedures for hiring student workers, a process which consumed four weeks. The three were selected according to their experience, communication and interviewing skills, and career interests. The student-researchers then completed an on-line Protecting Human Research Participants training session and reviewed relevant literature. During training, they were briefed on project goals and timelines, provided with uniform interview protocols, and trained on best practices, including rapport building and confidentiality and anonymity procedures. To foster accountability, each student-researcher submitted an individual work plan with goals and deadlines. To encourage collaboration, student-researchers worked together in role-playing and evaluating each other's practice interviews. Student-researchers were compensated monetarily and were offered potential co-


authorship of conference papers and articles. One faculty member served as primary liaison with the student-researchers, keeping abreast of work schedules and university-required paperwork. Each student-researcher interviewed eleven undergraduates. They aimed for intentional diversity (Anderson & Jack, 1991), selecting participants for diversity concerning age, gender, ethnicity, major, and universities attended. Due to limited experience, first-year students were not heavily recruited. To protect identities, participants were asked to think about particular professors when answering questions, but to avoid using names. To enhance anonymity, participants created their own pseudonyms, and tapes were submitted directly to a professional transcriptionist. Audio-taped interviews averaged 35 minutes each, resulting in 402 pages of verbatim transcripts. A total of 33 undergraduate students, representing a diverse population, participated, including 16 females and 17 males. Self-described ethnicity included Caucasian or White (18), Hispanic (6), African American (2), Native American (1), Polish (1), Black and White (1), "a regular walking U.N." (1), and three declined labeling themselves. Ages ranged from 19 to 32, with an average of 21.8 years. Sixteen different majors were represented, with 16 attending the same university only, 14 transfers representing 12 institutions, and 3 at other universities. Participants were 2 first-year students, 8 sophomores, 9 juniors, and 14 seniors. Using inductive analysis (Anderson & Jack, 1991), interview transcripts were analyzed by the lead researchers, first to identify themes and trends for each participant, and then to identify themes and patterns across research questions. While participants varied in the degree of detail provided, their experiences point to a wide variety of behaviors and instructional practices. Exemplars were selected according to three criteria: representativeness, the degree to which quotes represent common perspectives or describe problematic interactions experienced


by others (similar views); intensity, the degree to which language reflects emotional, cognitive, or behavioral attachment to the category (strong views); and uniqueness, the degree to which quotes capture unique viewpoints not previously expressed (different views) (Van Manen, 1990). Students' descriptive language adds authenticity to the study (Manning, 1995).

Key Themes and Findings

Behaviors Encouraging Out-of-Class Communication

RQ1 addressed personal characteristics and behaviors that encourage students to engage in out-of-class communication. Participants provided a total of 174 comments about encouraging out-of-class communication, with ten key qualities discernable. Clearly, in-class communication sets the stage for whether or not students approach faculty outside of class.

Table 1: Qualities Most Likely for Out-of-Class Communication

Characteristic                          # of Students (a)   # of Statements (b)
Positive Personal Qualities                    21                  36
Invited Out-of-Class Communication             21                  29
Caring                                         16                  29
Instrumental Help                              14                  20
Positive Interpersonal Skills                  10                  13
Availability                                    8                  16
Challenging/Raising the Bar                     7                  11
Express/Discuss Common Interest                 7                   8
Good Teacher in Class                           5                   8
Recognize Students as Individuals               4                   4
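The statement counts in Table 1 sum to the 174 comments reported above; the short sketch below (counts transcribed from the table; purely an illustrative check) tallies them and expresses each quality's share of the total.

```python
# Minimal sketch: tallying the "# of Statements" column of Table 1.
# Counts are transcribed from the table; this is only an illustrative check.
statements = {
    "Positive Personal Qualities": 36,
    "Invited Out-of-Class Communication": 29,
    "Caring": 29,
    "Instrumental Help": 20,
    "Positive Interpersonal Skills": 13,
    "Availability": 16,
    "Challenging/Raising the Bar": 11,
    "Express/Discuss Common Interest": 8,
    "Good Teacher in Class": 8,
    "Recognize Students as Individuals": 4,
}

total = sum(statements.values())
print(f"total statements: {total}")  # 174, matching the figure reported in the text

for quality, count in sorted(statements.items(), key=lambda item: -item[1]):
    print(f"{quality}: {count} statements ({100 * count / total:.0f}% of all comments)")
```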

Prior to achieving outside connections, teachers must connect with students in class.

Students variously described the most important characteristic that led them to engage in out-of-class communication as showing empathy or caring about what students are dealing with. Those who showed interest in students' lives, and particularly those who showed interest in student success beyond classroom boundaries, received high praise. Along with caring behaviors, positive personal qualities encouraging interaction include "nice," "honest," "great sense of humor," "down to earth," "open," and "friendly." Similarly, faculty described as having good interpersonal skills, especially being "a good listener," encouraged out-of-class communication. The most accessible teachers were described as inviting out-of-class communication, both implicitly and explicitly. Implicit invitations took the form of being approachable, or giving off "that inviting feeling that we could meet anytime." Explicit invitations emanated from classroom introductions during the first day of class, with faculty actively creating a positive classroom climate. Often mirrored in the course syllabus, statements concerning the teacher's commitment to student success and expectations for conversations beyond classrooms were seen as indicative of teachers' welcoming of student contact. Typically, approachable teachers provided more time than official office hours, offering help anytime. Several reported teachers who invited feedback via e-mail or cell phones, and a few reported text messages. Helpfulness was the next key theme. Once students approached professors out of class, they expected to receive the help they sought. Students reported receiving tangible assistance on projects, essays, and exams that led to improved understanding and, quite often, higher grades. Helpfulness extended to being resourceful and referring students to other on-campus resources. Students are more likely to engage in out-of-class communication with faculty perceived as recognizing individual students' needs. They appreciated when faculty knew their names and


were aware of any circumstances the students may be dealing with. Students shared positive anecdotes of faculty helping them cope with illness, absences, study strategies, and opportunities to raise grades. At the same time, students are likely to engage in out-of-class communication with professors who challenge students, raise the bar, and help students improve. As one stated, "They push you along, but don't hold your hand."

Strategies for Encouraging Out-of-Class Communication

RQ2 explored specific instructional strategies students reported as effective in encouraging out-of-class communication with professors. Students provided several suggestions that faculty can use to inform their practice. Most obviously, faculty need to be present for office hours, keep appointments, and make time for students when they need help. Students expressed appreciation for positive, one-on-one time, particularly when they received the help they expected. To facilitate quick questions when students are likely to have them and allow for brief exchanges, students expect professors to arrive early to class and stay after class. Classroom management practices also contribute to out-of-class communication. Students responded well to syllabus statements inviting students to visit during office hours. Including a "by appointment" option is critical, since it is likely that professors' office hours conflict with students' class or work schedules. Letting students know on the first day, with regular reminders throughout the semester, about availability for extra help was reassuring. Several students pointed out faculty who wrote their e-mail address and office hours on the board every class. They were impressed by faculty who seemed to provide a 24/7 open door by providing home phone numbers or cell phone numbers in case students ran into emergencies. One student succinctly suggested, "Let us know that you enjoy talking with us, particularly about the course."

Students expect respect, positivity, and professionalism. When professors learned

and used students' names, they felt more valued, more connected, and more likely to interact out of class. Students also suggested that faculty recognize and greet students when they encounter them around campus, and, if possible, exchange basic pleasantries. Given that e-mail is the primary channel for academic and social connections, it is imperative that faculty respond promptly and politely. In addition to brief responses, including a friendly opening and closing personalizes the communication. Students reported faculty sending periodic e-mails to the class offering assistance on projects as they progressed throughout the semester. This was very helpful and positively impacted students' performance. In order to increase opportunities for one-on-one exchanges and specific feedback, students responded well to mandatory meetings. A few mentioned mandatory meet-and-greets held early in the semester to get acquainted and set goals. Mid-term consultations held with each student to review progress helped motivate them to participate in class and earn higher grades. In summary, positive out-of-class communication begins inside the classroom, with the level of competence a professor enacts, as well as students' perceptions of professors' caring and helpfulness. Outside the classroom, students benefit from faculty described as approachable and helpful, and those who recognize students as individuals. Positive out-of-class communication transforms student-faculty relations from impersonal to interpersonal, opening doors for mentoring, advising, and collaboration. Students' specific suggestions are helpful for engaging students in academic discourse and facilitating deeper understanding. Associated outcomes are increased academic success, increased integration and retention, more engaged learning, and increased satisfaction with academic experiences.

Faculty-Student Research Collaboration: Process-Related Issues

This study on out-of-class communication was conducted via a faculty-student

collaborative model, with many positive outcomes. This project allowed student-researchers to enhance interpersonal communication and interviewing skills. The project also allowed students an active voice in the research process, both as researchers and participants. Since the study's focus was both relevant and meaningful, it allowed for greater buy-in for the student-researchers. A key advantage was that participants were most likely more candid with student-researchers (as opposed to professors). Information gained by the student-researchers from the 33 student participants was instrumental in developing faculty development workshops and resources, and it has been widely disseminated throughout our university. Workshops on enhancing out-of-class communication and on best practices in student advisement have been delivered. A short article was included in our electronic newsletter, and a summarized list of students' suggestions was included on the back of brochures distributed at a pre-semester faculty forum and is included in new faculty orientation. While faculty-student collaborative research affords unique learning opportunities, it has limitations. First, implementing this project took much longer than anticipated, including following university procedures for student employment, and allowing time for recruitment, Institutional Review Board (IRB) training of each student, and IRB approval processes. By the time the data were collected, transcribed, analyzed for emergent themes, and initial outcomes reported, all three student-researchers had graduated and were unavailable to complete the entire project, thus defeating the collaborative process as it was originally designed. Faculty-student collaborative research also assumes a skill set and degree of self-directed learning that may not be practical for all students. In listening to completed interview audiotapes,


it was evident that some student-researchers were more invested in their role than others. Some interviews, where the interviewer lacked apparent questioning and probing abilities, produced minimal information, resulting in less than optimal data. On a positive note, interview transcripts point to common interviewer errors that can be informative to students studying the interview process. This research began as a faculty-initiated project. Although students had opportunities for input, the project was led by the faculty researchers. To be maximized, learning must be a partnership between dedicated teachers and motivated students. Since a critical component of collaborative inquiry is equitable ownership in the research process, it requires careful selection and training of student-researchers, dedication to team-based meetings, and investment in time beyond a typical semester. For us, for the student-researchers, and for the faculty who will benefit from the outcomes of this project, the investment is certainly worthwhile.

References

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco: Jossey-Bass.
Anderson, K., & Jack, D. C. (1991). Learning to listen: Interview techniques and analysis. In S. B. Gluck and D. Patai (Eds.), Women's words: The feminist practice of oral history (pp. 11-26). New York: Routledge.
Astin, A. (1993). What matters in college? Four critical years revisited. San Francisco: Jossey-Bass.
Banfield, S. R., Richmond, V. P., & McCroskey, J. C. (2006). The effect of teacher misbehaviors on teacher credibility and affect for the teacher. Communication Education, 55, 63-72.
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in higher education. American Association for Higher Education Bulletin, 39, 3-7.
Dobransky, N. D., & Frymier, A. B. (2004). Developing teacher-student relationships through out of class communication. Communication Quarterly, 52, 211-223.
Dolin, D. J. (1995). Ain't misbehavin': A study of teacher misbehaviors, related to communication behaviors, and student resistance. (Unpublished doctoral dissertation). West Virginia University, Morgantown.
Duran, R. L., Kelly, L., & Keaten, J. A. (2005). College faculty use and perceptions of electronic mail to communicate with students. Communication Quarterly, 53, 159-176.
Farley-Lucas, B. S., & Sargent, M. M. (In Press). Enhancing out-of-class communication: Students' perspectives. To improve the academy: Vol. 31, Resources for faculty, instructional, and organizational development. San Francisco: Jossey-Bass.


Farley-Lucas, B. S., & Sargent, M. M. (2007, July). Checking out mentally: Faculty misbehaviors and impact on students. Conference Proceedings of the international conference on Improving University Teaching, Jaén, Spain. http://iutconference.org
Fusani, D. S. (1994). Extra-class communication: Frequency, immediacy, self-disclosure, and satisfaction in the student-faculty interaction outside the classroom. Journal of Applied Communication Research, 22, 232-255.
Garko, M. G., Kough, C., Pignata, G., Kimmel, E. B., & Eison, J. (1994). Myths about student-faculty relationships: What do students really want? Journal on Excellence in College Teaching, 5(2), 51-65.
Kearney, P., Plax, T. G., & Burroughs, N. F. (1991). An attributional analysis of college students' resistance decisions. Communication Education, 40, 325-342.
Kearney, P., Plax, T. G., Hays, L. R., & Ivey, M. J. (1991). College teacher misbehaviors: What students don't like about what teachers say or do. Communication Quarterly, 39, 309-324.
Kuh, G. D., Kinzie, J., Schuh, J. H., & Whitt, E. J. (2005). Student success in college: Creating conditions that matter. San Francisco, CA: Jossey-Bass.
Manning, K. (1995). Authenticity in constructivist inquiry: Methodological considerations without prescription. Qualitative Inquiry, 3(1), 93-115.
McKeachie, W. J., & Svinicki, M. (2010). McKeachie's teaching tips: Strategies, research, and theory for college and university teachers (13th ed.). New York, NY: Houghton Mifflin.
McPherson, M. B., Kearney, P., & Plax, T. G. (2003). The dark side of instruction: Teacher anger as norm violations. Journal of Applied Communication Research, 31, 76-90.


Meyers, S. A. (2009). Do your students care whether you care about them? College Teaching, 57(4), 205-210.
Myers, S. A. (2002). Perceived aggressive instructor communication and student state motivation, learning, and satisfaction. Communication Reports, 12, 113-121.
Nadler, M. K., & Nadler, L. B. (2001). The roles of sex, empathy, and credibility in out-of-class communication between faculty and students. Women's Studies in Communication, 24, 241-261.
Richmond, V. P., Gorham, J. S., & McCroskey, J. C. (1987). The relationship between selected immediacy behaviors and cognitive learning. Communication Yearbook, 10, 574-590.
Schrodt, P. (2003). Students' appraisals of instructors as a function of students' perceptions of instructors' aggressive communication. Communication Education, 52, 106-121.
Van Manen, M. (1990). Researching lived experience. Albany, NY: State University of New York Press.
Wanzer, M. B., & McCroskey, J. C. (1998). Teacher socio-communicative style as a correlate of student affect toward teacher and course material. Communication Education, 47, 43-52.

THE 36TH INTERNATIONAL CONFERENCE ON IMPROVING UNIVERSITY TEACHING

Embracing Collaborative Testing for Formative Assessment at Universities
Formative Assessment to Promote Collaboration
Judith A. Wafula
Daystar University, Kenya
Admissions and Records Department
19th-22nd July, 2011

Abstract: Collaboration involves working together in small groups to accomplish a task. Collaborative testing (as opposed to individual testing) entails students working together in small groups to answer examination questions. It is based on the premise that group examinations reflect the reality of the workplace, where professionals work in teams, come up with a set of results, and are judged on the basis of group rather than individual performance. The paper provides an insight into the context of formative evaluation at universities and provides suggestions for action.

Summary: Individual tests have a tendency to label students as poor if they do not meet the set requirements, yet students have specific abilities and talents that can be tapped through collaborative tests. A balance between the approaches in formative and summative evaluation is thus needed in order to have a good picture of students' strengths and weaknesses (Garrison & Ehringhaus, 2011).

SECTION 1

Introduction

Collaboration is a way of creating consensus while working together to accomplish a task. Learning Point Associates (2010) describe it as a philosophy of interaction and personal lifestyle. Thus it develops one's way of interacting with others and involves the dialogue between students and the curriculum. However, one skill that has been found by employers to be lacking in most graduates is the skill of collaboration (Kapitanoff, 2009). Studies reveal that the bulk of collaboration occurs at team level (Keyton, Ford, & Smith, 2008). Hence collaborative testing entails students working in teams to handle examination tasks. Relevant to the testing of students are the psychoeducational process model and the task analysis model (Helton, Workman, & Matuszek, 1982). The psychoeducational process model assumes that learning difficulties result from deficits in processing skills such as visual perception, auditory perception and short-term memory. As a result, adequate processing skills are a pre-requisite to academic success, and any deficits need to be identified and corrected. On the other hand, the task analysis model assumes that learning difficulties result from past failures to master pre-requisite academic skills incorporated in a hierarchy of skills. For instance, students must learn how to count before learning how to carry out addition. As a consequence, it is crucial to identify lower-level skills that have not been mastered and to help students master the skills to facilitate academic success. This ensures a sequential fashion of acquisition of skills. Tests should therefore help to identify students' deficits, whether in the processing skills as advanced by the psychoeducational model or in the mastery of pre-requisite skills as advanced

by the task analysis model. Sandahl (2009) describes collaborative testing as a learning strategy that fosters knowledge development, critical thinking and group processing skills. Interactive environments are created where students take responsibility for their own learning and that of their peers (Panitz, 1996). Further, collaboration benefits remembering (Badsden, Badsden, & Henry, 2000). Hence collaborative testing can help to reduce the gaps in some of the learning deficits, as students are given conditions that allow them to learn more from one another during the tests. This method is contrary to the traditional method that emphasizes individual accountability in the examination process. The traditional school of thought raises concerns about how to deal with those who sit quietly and do not offer any input in collaborative testing, on the one hand, and those who impose their ideas on others irrespective of accuracy, on the other. These issues are discussed in this paper. The paper focuses on student-student collaboration with the instructor as the facilitator. It begins by providing an insight into collaborative testing, delves into the context of formative evaluation at universities, and gives suggestions for action. It is based on documentary analysis of cases of collaboration in the job industry and personal experiences of classroom collaboration and work collaboration. Though the aim of workplace collaboration is productivity while that of classroom collaboration is student learning, the lessons from corporate and classroom collaboration are mutually instructive (Pelt & Gillam, 1991).

Review of Literature

Results of a study on collaborative testing and test performance for students in a sociology course done at the College of Charleston showed that collaborative testing alone
(independent of prior collaborative learning) had a significant positive association with test performance that varied with the level of cognitive processing of the test question (Breedlove, Burkett, & Winfield, 2004). Performance on test questions that emphasized recall was not affected by lack of prior collaborative learning, while questions that required higher levels of cognitive processing, such as explaining, interpreting, applying, making inferences, drawing conclusions and making generalizations, were affected negatively by lack of collaborative learning. Collaborative testing without prior collaborative learning therefore gives better performance on less complex questions than on higher cognitive level questions. Consequently, for good results, collaborative learning and collaborative testing should go hand in hand.
An experimental study on the effectiveness of collaborative testing in a computer programming class revealed some positive results, such as increased student engagement in the group processes, reinforcement of answers, management by exception and increased quality of questions set by faculty, thus increasing the amount of learning (Simkin, 2005). Further, the effect of collaborative testing on student performance was examined in a study at a chiropractic college in the USA. A comparison was made between two cohorts of students: a control group (which did weekly quizzes, unit examinations and the final examination individually) and an experimental group (which did the quizzes collaboratively and the unit and final examinations individually) (Meseke, Nafziger, & Meseke, 2008). The experimental group obtained higher means in both the quizzes and the unit/final examinations, evidence that collaborative testing increased student performance.

SECTION II
Collaborative Testing
Collaborative testing is an extension of collaborative learning into the evaluative setting (Breedlove, Burkett, & Winfield, 2004). Students benefit as they sit for collaborative tests; the tests are hence a continuation of the learning process. As opposed to individual testing, collaborative tests entail students working together in small groups of two to six members to answer examination questions (Kapitanoff, 2009). Depending on the aim of the test, either one common set of results representing the group's collective input or individual results can be presented at the end of the interaction. Pelt and Gillam (1991) give two categories of collaboration: collaborative group work (teamwork collaboration) and shared-document collaboration. In collaborative group work one uses input from a team but retains final responsibility for the work and decisions, while in shared-document collaboration there is shared authority and decision-making responsibility for important aspects of the work. In the formative evaluation of students, either or both of these forms can be applied.
Collaborative testing is based on the premise that group examinations reflect the reality of the workplace, where professionals work in teams, come up with a set of results and are judged on the basis of group rather than individual performance (Simkin, 2005). Academic practice should therefore prepare students to fit well into work settings. Studies show that it is possible to learn collaborative skills by collaborating (Cortez, Nussbaum, Woywood, & Aravena, 2007), hence the need to incorporate collaboration in preparing students for work.
Research reveals some benefits of collaborative testing, which include elimination of cheating in examinations; lowering of test anxiety and stress; improvement in student
satisfaction and motivation, leading to improved test performance; more learning; and development of conflict resolution skills (Meseke, Nafziger, & Meseke, 2008). Studies show that test anxiety negatively influences recall of learned information (Russo & Warren, 1999); collaborative testing helps to reduce this problem. Students are allowed to work together and build upon each other's knowledge, thus enhancing understanding. Collaboration also helps one to learn that one's own way of perceiving situations may not be the only way, and that others may have differing and perhaps more accurate perceptions that must be accepted (Tebeaux, 1991). Besides, collaborative testing helps students to perceive exams as learning experiences rather than a chore or punishment (Kapitanoff, 2009). Apart from developing persons well versed in their field of training, collaborative testing also develops positive attributes such as mutual understanding, patience, good listening skills, honesty, accommodation of one another and consultative rather than competitive skills. One is involved in the examination process while at the same time learning. The traditional school of thought views the faculty in the collaborative model as a passive facilitator and regards high scores obtained in collaborative testing as resulting from tasks that are too easy (Kapitanoff, 2009). However, students should be developed in all spheres of life and not only intellectually. Consequently, collaborative testing is a way of developing well-rounded students able to meet challenges in life.
Formative Testing at Universities
In reality, tests can be motivators or, conversely, paralyzing. Nault (1994) gives two analogies, of Nina and Alex. For Nina, tests are motivators that make learning happen, since they make her read several books and articles, adding to her store of knowledge and skills. On the
other hand, Alex is so fearful of the test that he avoids thinking about it. He does no preparation until the night before the test. He spends many hours reviewing the entire course, and on the day of the test he forgets whatever he read and remembers only the fear, panic and exhaustion surrounding the test. Hence, in the case of Alex, the test did not make learning occur. He does not view tests as valuable parts of his education and only studies out of fear of failure or punishment. These two analogies give a picture of how tests are perceived by students. Collaborative testing makes test experiences more student friendly and enhances learning.
Tests are generally used by instructors to obtain information about their students' strengths and weaknesses. They can be formative or summative. Summative assessment is done periodically to determine what students know at a particular point in time; it includes end-of-chapter or unit examinations, end-of-semester examinations and national examinations, among others. Summative assessments help to evaluate the effectiveness of programmes, curricula and goals and to determine student placement in programmes. However, such assessment happens too far down the learning path to provide information at the classroom level for making instructional adjustments and interventions during learning (Garrison & Ehringhaus, 2011). This is achieved through formative assessment. Formative assessment, as opposed to summative assessment, is a step-by-step process of evaluation in the course of students' learning. It informs instructors about students' abilities and enables them to make necessary adjustments in good time while learning continues. Individual tests are usually an easy way of going about the formative evaluation of students. In addition, they are more often than not used in the summative evaluation of students at the end of a given learning period, thereby training students to be competitors rather than collaborators. The
individual tests therefore do not adequately prepare students for a work environment that requires teamwork to accomplish tasks.
SECTION III
SUGGESTIONS FOR ACTION
Introduction
Most of the approaches discussed below have frequently been used in the teaching and learning of students but have rarely been applied as modes of testing students. The paper emphasizes the application of these methods in testing to enhance collaborative skills, as opposed to individual testing, in the formative evaluation process. These suggestions serve to expand our thoughts in relation to the formative evaluation of students.
Designing Collaborative Tests
The effectiveness of tests depends on how they are designed and administered and on the outcomes they seek to achieve (Slusser, 2004). Therefore, in designing a collaborative test, the instructor ensures that the tasks created are relevant to students' needs and require interdependence of team members to accomplish; the test should fit students' skills and abilities and allow a fair division of labour (Davis, 1993). The creativity of the instructor in identifying tasks that reflect the diverse abilities and needs of students is thus vital. The objectives of the collaborative test should be clear, and students can be involved in setting the grading standards. Decisions have to be made on the skills to be assessed, the duration of the test, the venue (in class or outside class) and the mode of presenting results (team, individual or both). The instructor then plans how to organize students into teams. It is helpful for
the instructor to discuss with students some skills they would need to succeed in the teams (Tinzmann, Jones, Fennimore, Bakker, Fine, & Pierce, 1990). In addition, students need the liberty to choose from various options matching their interests in order to achieve the expected goals. Written contracts that list each member's obligations to the team and deadlines for tasks can be made.
A variety of test formats (single-answer multiple choice, multiple-answer multiple choice, short answer, essay, testlets, drag-and-drop, simlets, simulations, guided designs) can be used in collaborative testing (Garrison & Ehringhaus, 2011). In testlets the test is divided into smaller tests; for instance, it can present a case, and each testlet can address a different aspect of the case to be analyzed and resolved. Simulations and simlets present role-playing situations that operate like real-life situations (Western Governors University, 2011). In simulations, students play the roles of opposing stakeholders in problematic situations (Smith & MacGregor, 1992) that are very relevant to the job industry. For example, Daystar University commerce students participated in Kenya's Nairobi Stock Exchange simulation challenge in 2010, where they had to buy and sell shares. High profits indicate the effectiveness of the team's business strategies. Teams afterwards reflect on and analyze the simulation experience, their actions and those of others.
Guided designs use real-world problems to teach decision-making skills. They can be print, web-based or computerized (Wales & Stager, 1978). For instance, they can present written simulations to be tackled by teams. The main emphasis is for students to experience the design process as they deliberate on decisions, for example in coming up with a patient care plan. Students define a problem; state objectives, listing constraints, assumptions and known facts;
generate possible solutions, evaluate them against set criteria and select one; implement the decision; and evaluate the results and make recommendations (White & Coscarelli, 1986).
Ensuring Participation in Collaborative Testing
There are different ways of testing students collaboratively while still ensuring participation by all students. For instance, the test can be divided into discrete parts and each member of a team given a section to tackle, with exchange of ideas among members. Members of the team then share their work and write a team document or individual documents, depending on the test requirements. Alternatively, two members can tackle a section instead of one. The whole team can also tackle the entire test together, as in team projects. Members of the team assume different roles to enable a smooth process. The team generates content, then holds sessions to edit and review the content, writing style and other aspects of the work to produce the final results. The use of computers in collaborative tests should hence be encouraged, as they foster interaction, review of documents and merging. Besides, sections of the test can be tackled by different teams followed by discussions among the teams, but each team comes up with its own final results. Review teams can also be used to review the work of the various teams. Studies show that during collaborative work students experience a variety of social challenges but are able to regulate their emotions collaboratively (Järvenoja & Järvelä, 2009). Students should therefore be allowed to develop the experience of solving their own conflicts.
In scoring a team document, the instructor can decide to re-administer the same test to individual students and then take the weighted mean of the two scores, or opt to give a single score to all the students in the small team (Simkin, 2005). For example, in a collaborative assignment in my Introduction to Statistics class to determine the best measure of central tendency for the wealth of a province, and the reasons thereof, students were required to do the assignment individually, followed by collaboration in groups of three. The individual results differed from the group responses, evidence of re-evaluation of ideas. A policy is then required to specify the weights to be allocated to the individual and collaborative scores.

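A possible way to operationalise such a weighting policy is sketched below in Python. The function name, the 0.4 team weight and the example scores are hypothetical illustrations of the arithmetic only, not values taken from the class described above.

def combined_score(individual, team, team_weight=0.4):
    """Weighted mean of an individual re-test score and a team test score.

    team_weight is a policy choice (a hypothetical 0.4 here); the remainder
    of the weight goes to the individual score.
    """
    if not 0.0 <= team_weight <= 1.0:
        raise ValueError("team_weight must lie between 0 and 1")
    return (1.0 - team_weight) * individual + team_weight * team

# Example: a student scores 62% on the individual re-test and the team scores 80%.
print(combined_score(62, 80))  # 69.2

Setting the weight is precisely the policy decision referred to above: a higher team weight rewards collaboration more strongly, while a lower one protects against unequal contributions within the team.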
Conclusion
Collaborative testing is more relevant to the work environment for which students are being prepared and enhances the test-taking skills of students even in individual examinations (Lusk & Conklin, 2003). It is more effective in fostering student learning, not only in the subject area but also in personality and life skills. It also helps students to detach themselves from their own notions, accept and evaluate the criticisms and suggestions of others, and incorporate both their own views and the views of others in their work (Pelt & Gillam, 1991). Consequently, there is development of a group identity and of strategies for working together. By embracing collaborative testing, students have much to gain.

REFERENCES

Basden, B. H., Basden, D. R., & Henry, S. (2000). Costs and Benefits of Collaborative Remembering. Journal of Applied Cognitive Psychology, 14(6), 495-507.
Breedlove, W., Burkett, T., & Winfield, I. (2004). Collaborative Testing and Test Performance. Academic Exchange Quarterly, 8(3).
Cortez, C., Nussbaum, M., Woywood, G., & Aravena, R. (2007). Learning to Collaborate by Collaborating: A Face-to-Face Collaborative Activity for Measuring and Learning Basics about Teamwork. Journal of Computer Assisted Learning, 25(2), 126-142.
Davis, B. G. (1993). Collaborative Learning: Group Work and Study Teams. San Francisco: Jossey-Bass.
Garrison, C., & Ehringhaus, M. (2011, January 25). Formative and Summative Assessments in the Classroom. Retrieved January 21, 2011, from National Middle School Association: http://www.nmsa.org
Helton, G. B., Workman, E. A., & Matuszek, P. A. (1982). Psychoeducational Assessment: Integrating Concepts and Techniques. New York: Grune & Stratton.
Järvenoja, H., & Järvelä, S. (2009, September). Emotion Control in Collaborative Learning Situations: Do Students Regulate Emotions Evoked by Social Challenges? British Journal of Educational Psychology, 79(3), 463-481.
Kapitanoff, S. H. (2009). Collaborative Testing: Cognitive and Interpersonal Processes Related to Enhanced Test Performance. Active Learning in Higher Education, 10(1), 56-70.
Keyton, J., Ford, D. J., & Smith, F. I. (2008, August). A Mesolevel Communication Model of Collaboration. Communication Theory, 18(3), 376-406.
Learning Point Associates. (2010). Heterogeneous Grouping. Retrieved January 19, 2011, from North Central Regional Educational Laboratory: http://www.ncrel.org
Lusk, M., & Conklin, L. (2003, March). Collaborative Testing to Promote Learning. Journal of Nursing Education, 42(3), 121-124.
Meseke, C. A., Nafziger, R. E., & Meseke, J. K. (2008). Student Course Performance and Collaborative Testing: A Prospective Follow-On Study. Journal of Manipulative and Physiological Therapeutics, 611-615.
Nault, W. H. (1994). Testing. In M. M. Liebenson, L. A. Klobuchar, M. Feely, J. T. Peterson, & M. Norto (Eds.), The World Book of Study Power (Vol. 2, pp. 238-260). New York: World Book Inc.
Panitz, T. (1996, June). A Definition of Collaborative vs Cooperative Learning. Retrieved January 12, 2011, from friendsofchalkbyte.org: http://www.google.scholar.com
Pelt, W. V., & Gillam, A. (1991). Peer Collaboration and the Computer-Assisted Classroom. In M. M. Lay & W. M. Karis (Eds.), Collaborative Writing in Industry: Investigations in Theory and Practice (pp. 170-209). New York: Baywood Publishing Company, Inc.
Russo, A., & Warren, S. H. (1999). Collaborative Testing. College Teaching, 47.
Sandahl, S. (2009, May). Collaborative Testing as a Learning Strategy in Nursing Education: A Review of the Literature. Retrieved January 20, 2011, from HighBeam Research: http://www.highbeam.com
Simkin, M. G. (2005). An Experimental Study of the Effectiveness of Collaborative Testing in an Entry-Level Computer Programming Class. Journal of Information Systems Education.
Slusser, S. (2004, August). Group Quizzes and Attitudes: Collaborative Testing's Effect on Students' Attitudes. Retrieved January 16, 2011, from Allacademic Research: http://www.allacademic.com
Smith, B. L., & MacGregor, J. T. (1992). What is Collaborative Learning? In A. Goodsell, M. Maher, V. Tinto, B. L. Smith, & J. MacGregor (Eds.), Collaborative Learning: A Sourcebook for Higher Education. National Center on Postsecondary Teaching, Learning and Assessment at Pennsylvania State University.
Tebeaux, E. (1991). The Shared-Document Collaborative Case Response: Teaching and Research Implications of an In-House Teaching Strategy. In M. M. Lay & W. M. Karis (Eds.), Collaborative Writing in Industry: Investigations in Theory and Practice (pp. 124-145). New York: Baywood Publishing Company, Inc.
Tinzmann, M. B., Jones, B. F., Fennimore, T. F., Bakker, J., Fine, C., & Pierce, J. (1990). What is the Collaborative Classroom? Retrieved January 19, 2011, from NCREL: http://www.arp.sprnet.org
Wales, E. C., & Stager, R. A. (1978). The Guided Design Approach. Englewood Cliffs, NJ: Educational Technology Publications.
Western Governors University. (2011). Difference Between Simlets and Simulations. Retrieved March 14, 2011, from Techexams.net: www.techexams.net
White, P. G., & Coscarelli, C. C. (1986). The Guided Design Guidebook. Morgantown, WV: West Virginia University, National Center for Guided Design.

Title: Measuring and Building Classroom Communities
Conference Title: 36th International Conference on Improving University Teaching, Bielefeld, Germany
Conference Sub-Theme: Collaboration and Active Learning
Authors: Dr Tony Holland & Dr Robert Pithers, University of Technology Sydney, Australia

Abstract: This paper reports a study of the concept of classroom community within several student cohorts. The literature shows that a student's success at university, in terms of both academic achievement and course engagement, can be enhanced by the development of a sense of community among learners. This can take the form of both a social community and a learning community. The data collected in this study using a Classroom Community Scale show that different student cohorts within a single faculty have different senses of community; the paper then considers the factors that contribute to an increased sense of community.

Measuring and Building Classroom Communities

Introduction
In recent years, as evidenced in the published literature, there appears to be increased attention given to the concept and function of community. Indeed, the development and support of learning communities are suggested as a way to facilitate student motivation, persistence and learning. For example, Booker (2008) has stated that the need to belong to a sense of community is quite important during an undergraduate experience, especially now that higher education institutions are faced with an influx of students from diverse populations. Added to this challenge, of course, is the fact that many institutions have pursued flexible delivery options such as distance education and e-learning, leading to a decrease in attendance rates and face-to-face teaching/learning opportunities. A lack of opportunity for face-to-face communication and potential interaction with teachers and peers, it has been argued, will weaken the development of a sense of learning community (e.g. Palloff & Pratt, 1999; Rovai, 2002). As Rovai et al. (2004) have noted, research has indicated that a strong sense of community is related to the construction of cognitive knowledge as well as to increased student persistence and satisfaction, which is especially important for students in distance and e-learning courses, where attrition rates tend to be relatively high (Carr, 2000; Tinto, 1993).

Of course, just what is meant by the term community remains a point of issue in the published literature. Osterman (2000) has pointed to the fact that multiple terms are used in the literature about campus community, including engagement, belongingness, relatedness and connectedness. An instance is McMillan (1996, p. 315), who views a sense of community as belonging together, including a view that there is a trusted authority structure within which trade and mutual benefits progress from togetherness and a developing spirit that comes from shared experiences. Other researchers have added important points, such as Cheng (2004), who sees the most important principle of community as one which must involve faculty and students in a common focus on teaching and learning. Rovai (2003) has gone further to distinguish two distinct aspects of a campus community: social community and learning community. Social community tends to be based on the sort of dimensions just outlined, whilst learning community is about the extent of students' feelings about their shared group norms and values, and is concerned with how their perceived educational goals and expectations are satisfied by group membership. Here the learning environment or situation is important.

In all the foregoing, however, it should be noted how different aspects of community can vary from situation to situation and, in a psychological sense, how difficult it is to operationalise the community sub-concepts. Nonetheless, the published literature does show attempts to research collaboration and culture as integral to the learning process (e.g. Colbert, 2010) using empirical analytical approaches. Some examples are as follows: Ritter et al. (2009) investigated the perceptions of educational leadership post-graduate students concerning how well their face-to-face, online and hybrid classes developed a sense of community, using a Classroom Community Scale which purported to measure sense of community, connectedness and learning.
A greater sense of community was found in the face-to-face classes and in the hybrid classes than in the online classes. Connectedness was also statistically higher in these groups than in the online group. There appeared to be no between-group differences in the students' perception of their learning.

A study by Booker (2008) examined student perceptions of their most and least favoured classroom communities in terms of interactions with faculty and the peer group. A greater number of students attributed positive experiences and a sense of connection to their teacher in the favourite classroom. Female students, it was found, rated teachers in their favourite classes higher than male students did.

Other studies have taken a different focus on connectedness and community. For example, Colbert (2010) looked more at teacher beliefs, values, understandings and assumptions about their students and the need to recognise different student qualities and then redesign classroom interactions to connect content with student backgrounds. Other studies have looked at the outcomes of participation in intensive long-term learning communities (Eteläpelto et al., 2005) or, more important here, at the issue of classroom community and student goals (Summers & Svinicki, 2007). These authors found that classroom community was significantly higher when associated with cooperative learning classes, compared with lecture-style groups, though performance approach was significantly lower in the cooperative learning groups.

Vescio et al. (2007) completed a review of selected studies on the impact of professional learning communities on teaching practice and student learning. They examined 11 empirically based studies in this area and concluded that, overall, the results suggested that well-developed professional learning communities can have a positive impact on teaching practice and student achievement. Findings included, amongst other things, greater teacher student-centredness, increased collaboration and a renewed focus on learning and achievement. This latter factor of the learning community had the most effect on student achievement.

Faced with the challenge associated with operationalising the concept of classroom community, one worker in the field, namely Rovai (2002), developed and field-tested the Classroom Community Scale (CCS) and determined its validity and reliability for use with university students in the USA. Following his study of the preliminary CCS, using 375 students from 28 different courses, he found that the scale did appear to be a reasonably valid and reliable instrument for the measurement of classroom community. Furthermore, a statistical analysis of the data indicated that two factors were involved: connectedness and learning. The test instrument therefore generates an overall measure of classroom community as well as the two aforementioned subscales. It should be recalled that connectedness represents student community feelings of cohesion, spirit, trust and interdependence. Learning represents community members' feelings about their interactions with others as they pursue understanding, as well as the degree of shared values and beliefs about the extent to which their educational goals are being satisfied. Rovai et al. (2004) refined the CCS further and, after examining additional data on validity and reliability, concluded that there was sufficient evidence to use the CCS in educational research. The instrument is used in the present research.

It is interesting to note that the results from Rovai's (2002) study showed that Classroom Community scores were relatively stable across ethnic groups and by course content area. Rovai also claimed some evidence that female students, on average, possessed a significantly higher mean Classroom Community score.
There appear to have been a rather limited number of published studies which have used the CCS in university settings. Rovai (2002b) once again used the scale and found that a significant relationship existed between Classroom Community and perceived cognitive learning, but only with online learners. He argued this was because learners with a perceived sense of community should feel less isolated and have greater satisfaction with their academic work. The major problem here, however, was
that no independent measure(s) of learning was undertaken; results were based on the students' own perceptions. Furthermore, improved learning may lead to an increased sense of community and not vice versa. Nevertheless, the use of the CCS as a useful educational research tool was further enhanced by a study by Dawson (2006) using 464 Australian university students. Again the data were obtained from online courses, but the study did demonstrate the existence of a significant positive relationship between the frequency of student communication and a sense of community as measured by Rovai's (2002) CCS. Given the relatively focussed and limited use of the CCS so far, it was decided to use the CCS with another group of university students of diverse backgrounds, experiences and ethnic origins.

Subjects
The participants were all current students enrolled in education or organisational learning courses at a major Australian university. There were 50 students in all who completed the CCS, with approximately a 70:30 mixture of males and females whose ages ranged from 19 to 55 years. There were 30 students in a post-graduate group and 20 students in an undergraduate group. The undergraduate students were studying full time, while the post-graduate students were all enrolled on a part-time basis.

Research Instrument
The research instrument used to gather the data was basically the Classroom Community Scale (CCS) developed and refined by Rovai (2002) and Rovai et al. (2004). The CCS is a self-report scale which, as already mentioned, measures sense of community. The scale consists of 20 items such as "I do not feel a spirit of community" and "I feel confident that others will support me". Some of the items examine a sense of connectedness in the classroom, while others look at it in terms of the learning environment. A 5-point Likert scale is attached to each item (Strongly Agree to Strongly Disagree). Total points were assessed using these 5-point items, some of which are reverse scored to reduce response bias. Higher scores reflect a higher sense of community, as do higher scores for the Connectedness and Learning sub-scales. The basic meanings of these two sub-scale concepts were explained briefly in the introduction. Reliability for the CCS was found to be .93 (Cronbach's alpha); for the Connectedness and Learning subscales it was .92 and .87, respectively. Demographic questions were added to the CCS to look at variables that might cause the results to vary.

Procedure
The data were collected during Semester 1. Students were all volunteers, under no obligation to complete the scale. The data were collected in two different small-group settings within a week of each other (approximately week 4-5 of semester).

Results
Descriptive statistics were used to determine results for the three key measures of classroom community, namely Connectedness, Learning and Classroom Community. Table 1 summarises the average scores obtained from the CCS as well as providing similar comparative data from several other relevant studies.

Table 1. Means for Connectedness, Learning and Classroom Community.

Group          Connectedness   Learning   Classroom Community
U/G Group      24.5            28.4       52.9
P/G Group      21.3            28.6       49.9
Rovai Group    26.5            30.2       56.7
Dawson Group   22.1            25.5       47.6
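For readers unfamiliar with how scale totals of this kind are produced, the following is a minimal sketch in Python of Likert-item scoring with reverse-scored items and a Cronbach's alpha reliability estimate. The item positions, the 4-item short form and the response data are hypothetical illustrations only; they are not the actual CCS items or the data behind Table 1.

from statistics import pvariance

SCALE_MAX = 5           # 5-point Likert scale (Strongly Agree ... Strongly Disagree)
REVERSE_ITEMS = {1, 3}  # hypothetical 0-indexed positions of reverse-scored items

def reverse_score(responses):
    """Flip the flagged items so a higher value always means a stronger sense of community."""
    return [(SCALE_MAX + 1 - r) if i in REVERSE_ITEMS else r
            for i, r in enumerate(responses)]

def total_score(responses):
    """Total scale score for one respondent."""
    return sum(reverse_score(responses))

def cronbach_alpha(scored):
    """Cronbach's alpha for a list of respondents' reverse-scored item lists.

    Population variances are used for simplicity; published analyses may differ slightly.
    """
    k = len(scored[0])
    item_variances = sum(pvariance(item) for item in zip(*scored))
    total_variance = pvariance([sum(person) for person in scored])
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses from four students on a 4-item short form (not the real 20-item CCS).
raw = [[4, 2, 5, 1], [5, 1, 4, 2], [3, 3, 3, 3], [4, 2, 4, 2]]
print([total_score(r) for r in raw])                              # [18, 18, 12, 16]
print(round(cronbach_alpha([reverse_score(r) for r in raw]), 2))  # about 0.89

The same idea, applied to the 20 CCS items and split into the Connectedness and Learning item subsets, would yield totals and sub-scale scores of the kind summarised in Table 1 and reliability figures of the kind reported above.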

A between-group comparison of the means from this study showed classroom community scores of 52.9 for the undergraduate group and 49.9 for the post-graduate group. Mean scores on the Learning sub-scale were almost identical, while the post-graduate group scored lower on the Connectedness sub-scale. A 2x2 chi-squared analysis by group and sub-scale score was significant at the p

Teaching large groups (more than 700 students) is a challenge to every lecturer. Because of the rising shortage of skilled workers, particularly in engineering, new ways of providing high-quality education while at the same time allowing for large audiences need to be designed. An essential way to improve engineering education is seen in the shift from teaching to learning, i.e. from teacher-centered to student-centered education. A possible strategy for student-centered learning is project-based learning, which facilitates action-oriented and sustainable learning. Although it is a big challenge, project-based learning can also be used successfully in large-group study courses. Besides a description of cen