

This chapter describes a new model of evaluation capacity building designed to build the organizational evaluation capacity of schools while simultaneously developing the evaluation capacity of individual teachers and STEM graduate students.

5

Collaborative Evaluation Communities in Urban Schools: A Model of Evaluation Capacity Building for STEM Education

Douglas Huffman, Frances Lawrenz, Kelli Thomas, Lesa Clarkson

This chapter describes a new model of evaluation capacity building (ECB), one that advances notions of evaluation capacity building in STEM education and encourages new ways of thinking about ECB by linking concepts of organizational capacity building with the individual capacity building of K–8 teachers and graduate students. Stockdill, Baizerman, and Compton (2002) recently presented a framework and working definition of ECB. They defined it as “the intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its use routine” (p. 14). They also described eleven elements of ECB to give a comprehensive overview of the complex nature of ECB. However, as Stockdill, Baizerman, and Compton point out, much work remains to further the ECB concept. One way to move the concept forward is by presenting models of ECB and testing them in real-world settings. In this chapter we present such a model. The project we describe uses an immersion approach to develop the evaluation capacity of urban schools and of STEM graduate students. The project constitutes a unique model of how to involve K–8 teachers and graduate students in the evaluation process in an attempt to develop the evaluation capacity of schools, teachers, and graduate students.

NEW DIRECTIONS FOR EVALUATION, no. 109, Spring 2006 © Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com) • DOI: 10.1002/ev.179


74 CRITICAL ISSUES IN STEM EVALUATION

Building the evaluation capacity of K–12 schools is clearly an important goal for the field of evaluation, especially in the current educational environment, which is dominated by issues of accountability and high-stakes testing. In a recent issue of New Directions for Evaluation, King (2002) presented a case study of her work on evaluation capacity building in a large suburban school district. The case illuminates the challenges confronting large districts as they attempt to increase their capacity to conduct and use program evaluation. K–12 schools face an onslaught of tests designed to judge whether or not they are making adequate yearly progress (AYP) in increasing student achievement. Currently, mathematics and reading test scores are the main measures of AYP, but starting in 2007 science test scores will also be required as part of the No Child Left Behind Act. The test scores are made public for all to see, and schools that do not make the goal are singled out and risk being placed on a needs-improvement list or, even worse, subjected to external intervention.

The focus on student achievement has forced schools to become more data-driven as they attempt to analyze test scores and make decisions about how students can improve scores the next year. To some extent, schools have become overwhelmed with data. State, district, and school- and classroom-level assessments have all led to more data than schools can reasonably manage. One could argue that schools need to develop evaluation capacity to manage and use the multitude of data. As schools build evaluation infrastructure, administrators and teachers need support to develop evaluation knowledge and skills as well as increase their ability to conduct program evaluation.

The field of evaluation is also facing a shortage of evaluators with a STEM background who can evaluate programs that require STEM expertise. The National Science Foundation (NSF) has placed priority on developing STEM evaluators who have both a strong STEM background and an understanding of educational contexts. In an attempt to address the lack of qualified STEM education evaluators and to help schools and teachers build evaluation capacity, we designed an immersion model of ECB that serves to simultaneously develop the evaluation skills of STEM graduate students and help K–8 schools develop their evaluation capacity.

NSF Evaluation Capacity-Building Efforts

The Collaborative Evaluation Communities (CEC) project described in this chapter is designed to build on previous evaluation capacity-building efforts. The most common method of developing STEM evaluation graduate students across the country is through university-based programs. Graduate students typically take coursework in evaluation along with internship experience in the field. There are also a variety of institutes and workshops offered by organizations and universities. Several NSF-funded programs involve training graduate students. NSF funded the AERA/NSF Evaluation Training Fellowships in 1997, building evaluation capacity one evaluator at a time. The fellowship program provided a grant to the American Educational Research Association to fund institutions of higher education that in turn offered fellowships for six to nine graduate students. Although the program successfully educated a group of STEM education evaluators, one of its limitations was the difficulty of closely linking internship experience to STEM education evaluation (AERA, 2002). Other NSF-funded programs designed to develop STEM graduate students are the Graduate Teaching Fellows in K–12 Education (GK–12) and the Centers of Learning and Teaching.

The NSF also funded training institutes with an emphasis on underrepresented groups. Howard University developed an Evaluation Training Institute that included professional development to broaden mathematics and science evaluators’ knowledge, awareness, and application of cultural and contextual competence in evaluation models, methods, standards, and guiding principles for use with diverse populations. Western Michigan University used a slightly more complex approach to STEM evaluation capacity building, called Materials Development, Training, and Support Services (MTS) for STEM evaluators. The project had two components: a three-week institute and an internship program for selected institute participants. Internship experiences were offered at NSF-funded Advanced Technological Education (ATE) projects. The internship portion of the MTS project was designed to improve evaluation capabilities of institute participants, serve the evaluation needs of NSF projects, increase project-level skill in evaluation, and construct evaluation tools.

Previous ECB efforts such as those already described have created training opportunities in STEM evaluation, but the field of STEM evaluation needs a more comprehensive model of ECB. Overall, there is a need for evaluation training programs to move beyond a training view of evaluation capacity building to include long-term activities and experiences that can immerse graduate students and stakeholders (such as K–12 school personnel) in evaluation. It is also important to foster collaborative experiences and not just individualized experience, especially since the practice of evaluation is often highly collaborative. This involves workshops, education, and institutes, along with immersion in real-world internships.

We know from research on professional development in other fields such as teacher education that immersion in longer-term professional development activities is preferred (Hawley and Valli, 1999). Short-term workshops and institutes are less likely to be translated back into practice (Loucks-Horsley, Hewson, Love, and Stiles, 1997). In addition, we believe the persistent achievement gap in mathematics and science that plagues urban school districts makes it especially important to develop STEM evaluators who can work in diverse settings, such as urban schools. In an attempt to address these needs in STEM education evaluation, we designed the Collaborative Evaluation Communities in Urban Schools (CEC) project. The CEC project was funded by NSF, and the ECB work began in early 2005 (Evaluative Research and Evaluation Capacity grant no. 0438069).

Urban Schools Immersed in Collaborative Evaluation Communities

Collaborative evaluation efforts between K–12 urban schools and universities can promote unique infrastructure development that serves the needs of the schools by enhancing their evaluation capacity. The key concept behind the CEC project is that by immersing both teachers and STEM and STEM education graduate students in the evaluation process, we can help build evaluation capacity and bridge the gap between district evaluation efforts and the teaching and learning of science and mathematics. Teachers need assistance in deciphering and understanding data to change practices; one way to do so is through a collaborative evaluation community. When teachers and university personnel engage in collaborative evaluations, an opportunity to share expertise in understanding instructional issues is created (Elmore and Burney, 1999; Fullan, 1993; Little, 1999; Sarason and Lorentz, 1998). The immersion experience of the CEC project is highly collaborative; we are developing teams of K–8 teachers, school administrators, district personnel, university STEM evaluators, and STEM and STEM education graduate students. Other forms of collaboration such as teacher inquiry, reflection, and data-based decision making have all been shown to be powerful tools for influencing an individual’s beliefs and theories of teaching and learning (Bissex, 1994; Cochran-Smith and Lytle, 1993; Kalnin, 2000). Huffman and Kalnin (2003) reported that when science and mathematics teachers engaged in collaborative inquiry, they not only changed their instructional practices but also began to make schoolwide changes. The CEC project is presenting both professional growth activities for mathematics and science teachers and long-term collaborative inquiry experiences as a means of developing the evaluation capacity of K–12 urban schools while simultaneously developing the evaluation expertise of STEM and STEM education graduate students.

By engaging teachers in the evaluation process, we assist school districts in developing an evaluation culture designed to support teachers in continuous examination of programs, with the ultimate intent of improving educational opportunities for all students. Collaborative efforts that engage individuals throughout a district in a shared evaluative process of data gathering and analysis can lead to sustained improvement in science and mathematics learning. These collaborative efforts support participants in building contextualized knowledge of their students, their science and mathematics programs, and the school community. Cochran-Smith and Lytle (1999) describe such collaboration as essential to developing teaching and learning knowledge.


The collaborative, team-based approach of the CEC project is addressing the organizational structures in urban schools that have hampered previous change efforts. To improve student achievement, Sarason (1996) urges districts and universities to create opportunities for collaboration that move across the current hierarchy. He concludes, “Teachers cannot create and sustain contexts of productive learning for students if those contexts do not exist for teachers” (pp. 253–254). Involving all CEC participants in significant and worthwhile evaluation activities is a challenge of the CEC project. The participants come to the evaluation process with their own expertise and background knowledge, along with differing expectations and goals. The challenge is to create and sustain a collaborative evaluation community that can bring these diverse views together in a way that is productive and useful for everyone.

The key to sustainability is to design collaborative evaluation communities that serve the needs of the urban schools and at the same time those of faculty and graduate students. This means we must engage in evaluation that serves the everyday teaching and learning needs of science and mathematics teachers, while engaging in program evaluation that can address larger evaluation issues across schools. In her work with evaluation capacity building in schools, King (2005) highlights five key activities that, when implemented over several years, are important to developing a culture for evaluation: “(1) creating an evaluation advisory group, (2) beginning to build a formal evaluation infrastructure, (3) making sense of test scores, (4) conducting one highly visible participatory inquiry, and (5) instituting action research activities” (p. 90). The CEC project we developed employs similar activities.

Graduate Students Immersed in Collaborative Evaluation Communities

The collaborative evaluation communities also serve to develop the evaluation skills of STEM and STEM education graduate students. The CEC project is helping to develop new evaluators by immersing graduate students in evaluation and by creating coursework and professional learning opportunities. A unique feature of the CEC project is its emphasis on encouraging science, mathematics, engineering, and education graduate students to enter the field of educational evaluation. At present, the majority of STEM Ph.D. students do not have a background in educational evaluation. If the field is to have strong evaluations of mathematics and science programs, then we must encourage graduate students from science- and mathematics-related disciplines to develop expertise in evaluation. In the CEC program, graduate students add the equivalent of a minor in evaluation studies to their existing degree in the sciences, mathematics, engineering, or science and mathematics education. We offer graduate students a chance to immerse themselves in a rich learning environment that includes working on an urban school evaluation project, coursework from some of the top evaluators in the country, and learning experiences at professional conferences and workshops around the country.

In the CEC project, we link graduate education with development of evaluation capacity in urban schools. The model involves immersing graduate students in real-world evaluation experiences through ever-increasing difficulty and challenge (see Figure 5.1). In the CEC project, each graduate student engages in a collaborative evaluation community for two years of graduate study. Participation is organized to give graduate students an introductory evaluation experience the first year, followed by more complex and more advanced experiences in the second year. For example, in the first year graduate students assisted faculty in creating a collaborative evaluation community and focused on one aspect of an evaluation, such as design and analysis of a survey that was part of a larger evaluation. They were still involved in other aspects of the evaluation in a supporting role. In the second year, graduate students assume greater responsibility. For example, they participate in analyzing and reporting data that will culminate in dissemination of results at a professional conference or in producing a journal article for publication. By the third year, the graduate students will no longer be funded by the grant and will begin work on their thesis. For graduate students who joined our program from the sciences or mathematics, the thesis will be in their discipline. Graduate students in education will be encouraged to design a thesis around the collaborative evaluation communities.


Figure 5.1. Overview of Graduate Education


In the project, the University of Kansas and the University of Minnesota are collaborating on development of graduate students. To encourage cross-university collaboration, the graduate students and faculty meet via teleconference twice each semester. This allows us to hold seminars with all the graduate students to discuss evaluation issues, share feedback on design and development of the collaborative evaluation communities in the schools, and lend important personal support to the process of evaluation. Graduate students also attend professional conferences periodically throughout the program, such as the annual meeting of the American Evaluation Association.

The Inquiry Cycle in Collaborative Evaluation Communities

The overview of the CEC given here permits insight into how this new immersion model operates. As mentioned, formation of teams of participants is central to the collaborative nature of the CEC project. CEC teams comprise teachers, school administrators, district personnel, graduate students, and university evaluators; the teams are established at schools to immerse participants in the process of evaluation. The CEC model of ECB uses an inquiry cycle to engage participants in the ongoing process of evaluation (see Figure 5.2). The cycle begins with collaborative examination of student achievement data at the national, state, and local levels involving input from university evaluators, K–12 personnel, and STEM graduate students. Following initial exploration of existing achievement data, CEC participants consider how the data might inform evaluation of mathematics and science programs in the school district. The first two steps in the process are designed to help participants explore data and think about the implications of the data for their own district. Exploring data typically produces many questions and serves as an excellent starting point for the evaluation process.

In the next step, CEC participants collaboratively establish a focus for an initial evaluation. They are asked to identify broad issues related to student achievement in mathematics or science, drawing on the data exploration of the first two steps of the inquiry cycle. Reflection on the issues leads the CEC participants to create evaluation questions that can serve as the focus for their upcoming investigations. Selection of focus areas is followed by development of an initial evaluation plan to guide data collection and analysis. The next two steps in the inquiry cycle immerse the participants in data collection and analysis. This involves developing data collection instruments and data analysis procedures that CEC participants use to answer the evaluation questions in their plan. The procedures used by the CEC teams in these two steps contribute to ECB by beginning to establish the infrastructure necessary for sustainable inquiry and evaluation in schools.


On the basis of what is learned from the initial data collection and analysis, communities develop specific action plans to address the problems they identified. The action plans require continuous monitoring, more data collection, and analysis. This in turn typically leads to more questions, more data collection, and more analysis. In the end, the process helps establish a continuous improvement cycle that can aid the collaborative evaluation communities in using evaluation as a tool to create change and empower teachers to become leaders in their schools. The inquiry cycle we have described includes aspects of the evaluative inquiry process promoted by Parsons (2002) and reflects the working definition of ECB given by Stockdill, Baizerman, and Compton (2002), emphasizing the importance of the “on-going, open-ended, and emergent nature of ECB work—sustaining evaluation and its uses” (p. 13).

An Example: The CEC Model of ECB in an Urban Elementary School

The CEC project in one school began with a small group seminar focused on exploring student achievement data and considering implications for the school district: the first two steps in the inquiry cycle. During the seminar, the collaborative evaluation communities from two elementary schools examined U.S., state, and local assessment data furnished by the university evaluators. Exploration of data from the Trends in International Mathematics and Science Study (TIMSS), state-administered mathematics and science tests, and school district tests constituted a basis for the participants to consider the implications for student achievement in mathematics and science in their own school district. The university evaluators asked the CEC participants to generate concept maps reflecting issues or problems that might be influencing mathematics achievement in the district. These first two steps in the inquiry cycle let the CEC teams begin connecting student achievement issues to student assessment data from a broader educational perspective.

Figure 5.2. Inquiry Cycle

The second small group session continued moving CEC participants toward finding a focus for evaluation in their school. Participants left the first seminar with the task of individually identifying four or five issues or problems that contributed to mathematics or science student achievement outcomes in their own school. At one school, nearly all the CEC participants identified issues and problems related to science achievement. As a result, the group decided to focus on the science program at the school. The teachers seemed open and willing to explore factors that could contribute to student science learning. In this session, the participants shared several issues and problems related to science teaching and learning, from which they selected five (resources, teachers’ content knowledge, instructional practice, standards and curriculum alignment, and assessment) to become the focus of ongoing investigation. The issues reflected the context of the school and the challenges the teachers faced regarding science instruction. During the session, teachers and the principal discussed openly the limited science instruction occurring in their school. They questioned their own instructional practices, curriculum materials (FOSS kits), and their content knowledge. At the end of the session, the teachers seemed a bit overwhelmed by the potential scope of evaluating the science program, although they were still positive about moving forward with the process.

By the end of the third session, the CEC team decided that it was important to focus the initial evaluation on better understanding science curriculum and instruction in the school, as well as on teachers’ content knowledge. The team wanted to know more about what science topics were being taught at each grade, the instructional approaches teachers were using, and opinions about the science curriculum. Another question that surfaced during the group discussion related to teachers’ content knowledge. The teachers on the CEC team believed that identifying their own strengths and weaknesses regarding knowledge of science content was important for informing a plan of action to ultimately improve student achievement in science. Because the teachers were immersed in the evaluation process as collaborative participants, they were genuinely interested in investigating curricular and instructional issues in their school.

The teachers relied on the expertise of the university evaluators and graduate students to facilitate the process for selecting and designing data collection methods and instruments. The CEC team discussed and debated the value of several methods for data collection, among them videotaping teachers as they taught science, a survey, individual teacher interviews, focus group sessions, and classroom observations. The benefits and disadvantages of each method were considered, particularly from the context of the teachers being active participants in an evaluation process. The team determined that designing and completing a survey to document the instructional methods teachers used to teach science, and then using the information to engage in discussion and debate with their colleagues about teaching science, was best for the initial data collection. The next several work sessions were spent designing a survey instrument specifically for the teachers in the school by reviewing, revising, and in some instances completely rewriting survey items from instruments used in the university evaluators’ previous work. The teachers on the CEC team contributed as true partners in developing a science teaching and learning survey for their school. The most powerful aspect of this step in the process was that it was teachers themselves who were evaluating science instruction and making decisions about how the data would be collected. The teachers began to think of themselves as primary users of evaluation to develop shared values and norms and to use the evidence to change practice.

The CEC team decided that part of the plan of action would be to inventory the contents of their new science curriculum and become more familiar with the materials and investigations. The teachers also planned to consult the school district curriculum pacing guide, as well as state science standards, in an attempt to target specific grade levels to use specific kits. The teachers hoped that creating an inventory of the content addressed in each kit and the materials available would be a first step toward better understanding how to use the kits in teaching science. It was also clear that the teachers needed help using the kits effectively. Another aspect of the plan was for the university evaluators and graduate students to guide the teachers through the kit investigations, effectively encouraging reflection and discussion about how to use the kits to teach science. As teachers in the school grow more comfortable exploring instructional issues, the CEC team plans to initiate strategies for increasing collaborative lesson planning, finding ways to observe colleagues teaching science, and examining effective science instruction in classrooms. Teachers in the school never (or rarely) participate in professional collaboration of this sort; through the plan of action the CEC team is pushing for change.

The action plan will require continuous monitoring, more data collection, and analysis. This in turn is likely to lead to additional questions, other data collection, and further analysis, perhaps resulting in a continuous improvement cycle that, through evaluation, may change the leadership ability of teachers in the school.

Concluding Remarks

Stockdill, Baizerman, and Compton (2002) state that the field needs to advance the concept of ECB; the CEC project described here is a unique model of evaluation capacity building in K–8 schools. The CEC model extends the concept of ECB through provision of a collaborative immersion approach that engages participants in the day-to-day activities of evaluation. Rather than focus on workshops and training institutes for participants, as is found in most NSF ECB efforts, the CEC project focuses on a mentor-assisted, grassroots, emergent approach to developing ECB that immerses the participants in evaluation.

Although the CEC project has made progress in developing the evaluation capacity of K–8 schools, teachers, and graduate students, the project still faces many challenges in developing ECB in schools. One is the day-to-day reality of life in an urban school. In one year, the CEC project has already faced teacher and principal turnover, difficulty with teacher buy-in, school financial struggles, overcrowded classrooms, district curriculum changes, and teaching assignment changes. All of this makes it difficult to develop long-term commitment, maintain consistency in the project, and carry out institutional infrastructure changes. In many elementary schools, the teachers “loop” from year to year, meaning they move up to the next grade level with the same group of students for two years. This makes it challenging for teachers to develop in-depth understanding of the curriculum and instruction expectations from year to year. Elementary teachers are also faced with preparation of multiple subjects and the challenging developmental needs of low-income children in urban schools. The additional responsibility of ECB (including collecting and analyzing data, reflecting on results, and building infrastructure changes) makes it difficult to generate teacher buy-in. Lack of time is a major barrier to fully developing the evaluation capacity of teachers and schools.

King (2005) noted that engaging teachers in research and evaluation activities is one of the most demanding aspects of ECB in schools. King stated that "the ideal would be for groups of collaborating teachers and staff to institute action research efforts on specific interventions with specific students to raise test scores" (p. 96); however, achieving this ideal is extremely difficult.

The open-ended nature of the immersion approach to ECB can also be daunting for participants. Teachers reported finding it difficult to narrow their evaluation choices to one or two topics. It was like opening a floodgate as teachers discussed all the concerns and problems they face teaching science or mathematics in an urban school. Their concerns consistently addressed issues of curriculum and instruction. Because curricular decisions are not easily changed, teachers often felt powerless in the decision-making process. The district often chose the curriculum for them and developed a pacing guide teachers were required to follow. The feeling of powerlessness was evident at the outset of the CEC project. Having participants shift their focus to issues they can actually control, such as instructional techniques and student achievement, helped give the teachers a feeling of ownership over the process. In the end, focusing on improving student achievement has thus far proven to be sufficient motivation for teachers to investigate the teaching and learning of science and mathematics.

Graduate students have also encountered many obstacles in collaborative evaluation communities. As important participants in the process, the graduate students needed to be self-starters who were able to take initiative. Most evaluation internships afford graduate students direct mentoring, but the graduate students in the CEC project were immersed in a project that unfolded as a team process. All participants had to maintain a level of flexibility as they worked to engage in evaluation. The graduate students in the CEC project are mentored by the faculty; however, it is an unusual type of mentorship. In the CEC project, the goal is to help the graduate students navigate a complex, collaborative team process.

Overall, the CEC model presents teachers, education communities, and graduate students with a unique approach to ECB. Those who engage in ECB in K–12 schools know that schools are a tough environment in which to gain buy-in and participation. Motivation is essential for participation. One key feature of the CEC model is the collaborative immersion approach, controlled and led by all participants rather than by the leadership in the school or the university. This is essential to building a grassroots effort. It is important for the teachers to see this as a vital activity they lead, not an activity they are told to do. In the future, we will find out whether such an approach results in long-term infrastructure changes. For now, all indications are that the CEC process can develop the ECB of schools, teachers, and graduate students in ways that not only improve the ability to engage in evaluation and use data to make decisions but also lead to improved achievement in science and mathematics for their students.

References

American Educational Research Association (AERA) Grants Program. "AERA/NSF Evaluation Training Program." Nov. 2002 (http://www.aera.net/grantsprogram/sub-web/ETPFly.html).

Bissex, G. L. "Teacher Research: Seeing What We Are Doing." In T. Shanahan (ed.), Teachers Thinking, Teachers Knowing. Urbana, Ill.: National Council of Teachers of English, 1994.

Cochran-Smith, M., and Lytle, S. Inside/Outside: Teacher Research and Knowledge. New York: Teachers College Press, 1993.

Cochran-Smith, M., and Lytle, S. "Relationships of Knowledge and Practice: Teacher Learning in Communities." Review of Research in Education, 1999, 24, 249–305.

Elmore, R., and Burney, D. "Investigations in Teacher Learning." In L. Darling-Hammond and G. Sykes (eds.), Teaching as the Learning Profession: Handbook of Policy and Practice. San Francisco: Jossey-Bass, 1999.

Fullan, M. "Why Teachers Must Become Change Agents." Educational Leadership, 1993, 50(6), 12–17.

Hawley, W. D., and Valli, L. "The Essentials of Effective Professional Development: A New Consensus." In L. Darling-Hammond and G. Sykes (eds.), Teaching as the Learning Profession: Handbook of Policy and Practice. San Francisco: Jossey-Bass, 1999.

Huffman, D., and Kalnin, J. "Collaborative Inquiry to Make Databased Decisions in Schools." Teaching and Teacher Education, 2003, 19(6), 569–580.


Kalnin, J. "Teachers Learning: A Cooperative Research Group in Action." Unpublished doctoral dissertation, University of California, Berkeley, 2000.

King, J. A. "Building the Evaluation Capacity in a School District." In D. W. Compton, M. Baizerman, and S. Stockdill (eds.), The Art, Craft and Science of Evaluation Capacity Building. New Directions for Evaluation, no. 93. San Francisco: Jossey-Bass, 2002.

King, J. A. "A Proposal to Build Evaluation Capacity at the Bunche-Da Vinci Learning Partnership Academy." In M. Alkin and C. A. Christie (eds.), Theorists' Models in Action. New Directions for Evaluation, no. 106. San Francisco: Jossey-Bass, 2005.

Little, J. W. "Organizing Schools for Teacher Learning." In L. Darling-Hammond and G. Sykes (eds.), Teaching as the Learning Profession: Handbook of Policy and Practice. San Francisco: Jossey-Bass, 1999.

Loucks-Horsley, S., Hewson, P., Love, N., and Stiles, K. Designing Professional Development for Teachers of Science and Mathematics. Thousand Oaks, Calif.: Corwin Press, 1997.

Parsons, B. A. Evaluative Inquiry: Using Evaluation to Promote Student Success. Thousand Oaks, Calif.: Corwin Press, 2002.

Sarason, S. B. Barometers of Change: Individual, Educational, and Social Transformation. San Francisco: Jossey-Bass, 1996.

Sarason, S. B., and Lorentz, E. M. Crossing Boundaries: Collaboration, Coordination, and the Redefinition of Resources. San Francisco: Jossey-Bass, 1998.

Stockdill, S. H., Baizerman, M., and Compton, D. W. "Toward a Definition of the ECB Process: A Conversation with the ECB Literature." In D. W. Compton, M. Baizerman, and S. Stockdill (eds.), The Art, Craft and Science of Evaluation Capacity Building. New Directions for Evaluation, no. 93. San Francisco: Jossey-Bass, 2002.

DOUGLAS HUFFMAN is associate professor of science education in the School of Education at the University of Kansas.

FRANCES LAWRENZ is Wallace Professor of Teaching and Learning in the College of Education and Human Development at the University of Minnesota.

KELLI THOMAS is assistant professor of mathematics education in the School of Education at the University of Kansas.

LESA CLARKSON is assistant professor of mathematics education in the College of Education and Human Development at the University of Minnesota.
