
The Challenges of Making Evaluation Useful

Michael Quinn Patton, Ph.D *

Ensaio: aval. pol. públ. Educ., Rio de Janeiro, v.13, n.46, p. 67-78, jan./mar. 2005

Página Aberta

Michael Quinn Patton, Ph.D
Faculty member of the Union Institute Graduate School, Minneapolis, MN, USA
Founder and Director of an organizational development consulting business, "Utilization-Focused Evaluation"
Former President of the American Evaluation Association

Excerpt

The evaluation profession has been studying ways of increasing the use of evaluations. Here are some of the things we've learned are important for evaluations to be useful: being clear about intended uses by primary intended users; creating and nurturing a results-oriented, reality-testing culture that supports evaluation use; collaboration in deciding what outcomes to commit to and hold yourselves accountable for; making measurement of outcomes and program processes thoughtful, meaningful, timely, and credible – and integrated into the program; using the results in a visible and transparent way, and modeling for others serious use of results.

* Michael Quinn Patton holds a doctorate in sociology from the University of Wisconsin, United States. An evaluation consultant for 30 years, he currently directs Utilization-Focused Evaluation, an organizational development consulting firm. He has evaluated programs in areas as diverse as health, human services, education, cooperation, the environment, agriculture, employment, training, leadership development, literacy, early childhood and parent education, poverty reduction, economic development, and advocacy. His consulting practice includes program evaluation, strategic planning, conflict resolution, staff development, and a variety of organizational development projects. He is the author of several reference books on program evaluation, including Utilization-Focused Evaluation: the new century text (1997) and Qualitative research and evaluation methods (2001).



As we gather today here in Rio de Janeiro to consider the challenges of making evaluation useful, a good place to begin is with the observation that evaluation, as a profession, has been internationalized. In 1995 evaluation professionals from 61 countries around the world came together at the First International Evaluation Conference in Vancouver, British Columbia. Evaluation associations from the United States, Canada, the United Kingdom, and Australia/New Zealand sponsored that conference. In 2005 the Second International Evaluation Conference will be held in Toronto. In the last 10 years more than 40 new national evaluation associations and networks have emerged worldwide, including the European Evaluation Society, the African Evaluation Society, the International Organization for Cooperation in Evaluation, and the International Development Evaluation Association. Brazil has its own Brazilian Evaluation Network, and in October of this year the Latin American Evaluation Association will be launched in Peru.

These globally interconnected efforts made it possible for information to be shared about strategies for and approaches to evaluation worldwide. Such international exchanges also offered alternative evaluation criteria. Thus, the globalization of evaluation supports our working together to increase our international understanding about factors that support program effectiveness and evaluation use.

The international nature of evaluation poses significant cross-cultural challenges in determining how to integrate and connect evaluation to local issues and concerns. One of the activities being taken on by new national evaluation associations is review of the North American standards for evaluation. Those standards call for evaluations to be useful, practical, ethical, and accurate. A summary of these standards is annexed at the end of this speech.

Fear of Evaluation and Resistance to Using Evaluation

Since the standards call for evaluations to be useful, we are faced with the challenge of how to conduct evaluations in such a way that they are useful. Resistance is so common that evaluators have created their own version of the Genesis story to describe this challenge. It goes like this:

In the beginning God created the heaven and the earth. And God saw everything that he made. "Behold," God said, "it is very good." And the evening and the morning were the sixth day.

And on the seventh day God rested from all His work. His archangel came then unto Him asking, "God, how do you know that what you have created is 'very good'? What are your criteria? On what data do you base your judgment? Just exactly what results were you expecting to attain? And aren't you a little close to the situation to make a fair and unbiased evaluation?"

God thought about these questions all that day and His rest was greatly disturbed. On the eighth day God said, "Lucifer, go to hell."

Thus was evaluation born in a blaze of glory [...].

This is what we would call a "summative evaluation" in the language of evaluation, that is, an evaluation aimed at an overall judgment of merit or worth. The traditional form of evaluation has involved having independent, external evaluators judge the effectiveness of a program, which is what we have come to call "summative evaluation." Summative evaluations are aimed at making decisions about which projects to continue, which to change, and which to terminate.

Formative Evaluation

A second type of evaluation is also very important, what we call "formative evaluation" or "evaluation for improvement," an approach to evaluation that emphasizes learning, improvement, and identification of strengths and weaknesses, especially from the perspective of project beneficiaries. In such an approach to evaluation the staff, beneficiaries, and evaluator work together collaboratively to learn how to be more effective and to change things that are not working effectively. For a long time I've been in search of a creation story that had a formative element to it, and it wasn't until I visited New Zealand that I found that story.

The Maori story of creation tells of Rangi the Sky Father and Papa the Earth Mother, existing intertwined and having their children between them, so close together that their fierce embrace shut out the light, and so their children lived in darkness. The god-children became disgruntled with the darkness and conspired to separate their parents, to separate the Sky Father from the Earth Mother so as to allow the light and the wind to come in, and they did so.

Tane, who was the god of the forest, of the animals, of the insects and birds, and all living things, was the most powerful of the god-children. It was finally his strength, added to that of his brothers, that managed to separate the Sky Father from the Earth Mother, and upon separating them, they saw them in the light for the first time. They saw Rangi the Sky Father with tears in his eyes, which became the rain, and Papa the Earth Mother in her nakedness.

And so Tane decided to clothe and decorate his mother and began planting trees in the earth mother to adorn her, but because he was young and inexperienced and still learning, he planted the trees wrong side round. He put the leaves in the earth, instead of the roots. When he had done this he stood back and looked at his handiwork and saw that no birds came and that no animals came and that it was not very beautiful. So he reflected on this, went back, took the trees out, turned them around, and planted them roots first, with the leaves in the air. And immediately the birds appeared and the animals and people came out to live beneath the trees.


And so here we have a creation story of formative evaluation, of trying something out, seeing whether or not it works, and when finding that it did not work, changing the practice to make it work. The first formative evaluation creation story I've encountered.

Formative evaluation is usually done in collaboration with program staff while summative evaluation is typically done independently and externally. The staff within a program may undertake formative evaluation for internal improvement, but for accountability purposes externally, a summative evaluation would be conducted independently. Formative evaluation is also done to get ready for summative evaluation.

One of the lessons that the field of evaluation has learned is that new and innovative projects need time to develop before they are ready for summative evaluation. Damage can be done to such projects if external evaluation is done too soon. So, the typical strategic approach is to engage in formative evaluation to learn and improve, and then at a certain point when the project is ready for a more rigorous, independent process, to undertake a summative evaluation.

Another distinction important to evaluation is to separate personnel evaluation (evaluation of staff performance) from program or project evaluation (evaluation of project effectiveness). These too are quite different approaches to evaluation. One of the challenges to useful evaluation is helping intended users understand the different kinds of evaluations and the different ways to use them appropriately.

Reality-Testing

All serious evaluation requires an attitude that we have come to call "a willingness to engage in reality-testing." Reality-testing means the willingness to look at what is really going on and what is actually happening, not just what we hope is happening.

We know from the fields of psychology and other social sciences that, as human beings, we have a tendency to believe what we want to believe. We have a tendency to see things in positive ways because we have high hopes that what we are doing is good. But we also have the capacity to fool ourselves, that is, to convince ourselves that good things are happening when, in fact, they are not. Thus, the mental mind-set for evaluation is the willingness to face reality. The mechanisms, procedures, and methods of evaluation are aimed at helping us test reality. This kind of reality-testing mentality must be established within programs so that staff, directors, and funders of programs are willing to admit and learn from mistakes. We have a saying in the United States that we learn the most from our mistakes, not from our successes. In order to learn from our mistakes we must be able to recognize and admit when we have made mistakes. Evaluation contributes by helping us identify what is working and what is not working, and then to change things that are not working.

Evaluations help overcome reality distortion. Rio de Janeiro takes its name from a reality distortion. The original Portuguese explorers thought they had sailed into a river when they entered what is now Rio de Janeiro. Instead, it turned out to be a harbor, but the name stuck – an inaccurate name.

One of the important contributions of evaluation is to help program staff and planners move beyond the reality distortions of personal "intuition." Many times when I am interviewing staff about how they know that intended beneficiaries are really being helped, they reply: "Because I feel people are being helped." That is reliance on "intuition," but the challenge of evaluation is to go beyond "intuition" by establishing concrete evidence that people are being helped.

Let me share an example. I had the opportunity in Japan to visit the Morioka Citizen's Welfare Bank, which was engaged in a project to collect used clothing and used household goods from Japanese citizens to send to needy people in the Philippines. They had collected a large amount of discarded goods for donation to the Philippines. However, when this initiative was implemented, they found that many of the poor people in the Philippines were insulted by being given discarded clothing and goods from Japan. For them, it was a loss of face to receive something that Japanese people no longer wanted. This kind of charity was experienced as demeaning. Thus, this very kind idea, this very good idea for helping fellow citizens in another country, simply did not work in practice. It took great insight and courage, I believe, to admit that this project was not working as originally designed and to change it. The project was redesigned as a technical assistance and development effort to help the people in the Philippines design their own goodwill program to collect and distribute used clothing and goods within the Philippines. Thus, this project changed from one of distributing used Japanese goods to one of building capacity within the Philippines for the collection and distribution of used goods. This is an example of formative evaluation, of the willingness of the people within an NPO program to look honestly and courageously at what was working and what was not working, and to change what was not working.

The final approach to evaluation I would like to mention is called "Knowledge-Generating Evaluation" or "Evaluation to Learn Lessons." Both summative and formative evaluation are aimed at assessing the effectiveness of specific, individual projects. In contrast, knowledge-generating evaluation looks for patterns of effectiveness across many different projects in order to learn general lessons about what works and doesn't work. For example, if we do evaluations of many different recycling projects, we can bring together the findings from those different projects to learn lessons about what is most effective in undertaking recycling initiatives. Several philanthropic foundations in the United States have made this form of evaluation a priority for future evaluative studies.

Major Challenges of Making Evaluation Useful

The evaluation profession has been studying ways of increasing the use of evaluations. Here are some of the things we've learned are important for evaluations to be useful.

• Being clear about intended uses by primary intended users.

• Creating and nurturing a results-oriented, reality-testing culture that supports evaluation use.

• Collaboration in deciding what outcomes to commit to and hold yourselves accountable for.

• Making measurement of outcomes and program processes thoughtful, meaningful, timely, and credible – and integrated into the program.

• Using the results in a visible and transparent way, and modeling for others serious use of results.


Utilization-Focused Evaluation

One major approach to evaluation is Utilization-Focused Evaluation (PATTON, 1997), which builds on the observations above. It is designed to address these challenges. Utilization-Focused Evaluation (U-FE) begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on intended use by intended users. Since no evaluation can be value-free, utilization-focused evaluation answers the question of whose values will frame the evaluation by working with clearly identified, primary intended users who have responsibility to apply evaluation findings and implement recommendations.

Utilization-focused evaluation is highly personal and situational. The evaluation facilitator develops a working relationship with intended users to help them determine what kind of evaluation they need. This requires negotiation in which the evaluator offers a menu of possibilities within the framework of established evaluation standards and principles.

Utilization-focused evaluation does not advocate any particular evaluation content, model, method, theory, or even use. Rather, it is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation.

Situational responsiveness guides the interactive process between evaluator and primary intended users. A utilization-focused evaluation can include any evaluative purpose (formative, summative, developmental), any kind of data (quantitative, qualitative, mixed), any kind of design (e.g., naturalistic, experimental), and any kind of focus (processes, outcomes, impacts, costs, and cost-benefit, among many possibilities). Utilization-focused evaluation is a process for making decisions about these issues in collaboration with an identified group of primary users, focusing on their intended uses of evaluation.

A psychology of use undergirds and informs utilization-focused evaluation: intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they've been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

Situational Responsiveness

Situational factors that affect evaluation design and use include program variables (e.g., size, complexity, history), evaluation purposes (formative, summative), evaluator experience and credibility, intended users, politics, and resource constraints. An evaluator demonstrates situational responsiveness when strategizing how various factors affect evaluation design and use. The implication is that no one best evaluation design exists; that is, no standardized cookie-cutter approach can be applied regardless of circumstances and context. The standards and principles of evaluation provide direction, but every evaluation is unique.

Situational responsiveness involves negotiating and designing the evaluation to fit the specific intended uses of the evaluation by particular intended users.

Evaluative Thinking as a Methodology and Tool

We tend to think of methodology as referring to techniques of data collection and analysis. Mixed methods evaluators often use the metaphor of a "tool kit" to remind evaluators to pick the right tool for the job. Experienced tool users remind us that having only one tool is both dangerous and limiting: famously, if all you have is a hammer, everything looks like a nail. But the focus on evaluation as a source of applied social science tools or methods remains limited.

Webster's New World Dictionary (1995) defines methodology as "the science of method, or orderly arrangement; specifically, the branch of logic concerned with the application of the principles of reasoning to scientific and philosophical inquiry."

This definition directs our attention beyond data collection and analysis techniques to evaluative thinking.

[Figure: evaluative thinking as the connection between Action and Reflection]

Evaluative thinking is one way to think about the connection between action and reflection. In order to reflect on action, program staff and social innovators must also know how to integrate and use feedback, research, and experience, that is, to weigh evidence, consider inevitable contradictions and inconsistencies, articulate values, interpret findings, and examine assumptions, to note but a few of the things meant by "thinking evaluatively."

The capacity to think astutely is often undervalued in the world of action and taken for granted in the world of scholars. In contrast, the philosopher Hannah Arendt (1968) identified the capacity to think as the foundation of a healthy and resilient democracy. Having experienced totalitarianism, then having fled it, she devoted much of her life to studying it and its opposite, democracy. She believed that thinking thoughtfully in public deliberations and acting democratically were intertwined. Totalitarianism is built on and sustained by deceit and thought control. In order to resist efforts by the powerful to deceive and control thinking, Arendt believed that people needed to practice thinking. Toward that end she developed "eight exercises in political thought." She wrote that "experience in thinking... can be won, like all experience in doing something, only through practice, through exercises."

From this point of view, might we consider every evaluation an opportunity for those involved to practice thinking? In this regard we might aspire to have evaluation of social programs do what Arendt (1968, p. 14-15) hoped her exercises in political thought would do, namely, "to gain experience in how to think," in this case, how to think about and evaluate the complex dynamics of social innovation in order to learn and increase impact. Her exercises "do not contain prescriptions on what to think or which truths to hold," but rather focus on the act and process of thinking. For example, she thought it important to help people think conceptually, to "discover the real origins of original concepts in order to distill from them anew their original spirit which has so sadly evaporated from the very keywords of political language – such as freedom and justice, authority and reason, responsibility and virtue, power and glory – leaving behind empty shells [...]". Might we add to her conceptual agenda for examination and public dialogue such terms as outcomes and impacts, and accountability and learning, among many possibilities?

Helping people learn to think evaluatively can be a more enduring impact from an evaluation than use of specific findings generated in that same evaluation. Findings have a very short 'half-life' – to use a physical science metaphor. They deteriorate very quickly as the world changes rapidly. Specific findings typically have a small window of relevance. In contrast, learning to think and act evaluatively can have an ongoing impact. The experience of being involved in an evaluation, then, for those actually involved, can have a lasting impact on how they think, on their openness to reality-testing, on how they view the things they do, and on their capacity to engage in innovative processes.

Discovering Process Use

Trying to figure out what's really going on is, of course, a core function of evaluation. Part of such reality-testing includes sorting out what our profession has become and is becoming, what our core disciplines are, and what issues deserve our attention. I have spent a good part of my evaluation career reflecting on such concerns, particularly from the point of view of use, for example, how to work with intended users to achieve intended uses, and how to distinguish the general community of stakeholders from primary users so as to work with them. In all of that work, and indeed through the first two editions of Utilization-Focused Evaluation (a period spanning 20 years), I've been engaging in evaluations with a focus on enhancing utility, both the amount and quality of use. But when I went to prepare the third edition of the book, and was trying to sort out what had happened in the field in the ten years since the last edition, it came to me that I had missed something.

I was struck by something that my own myopia had not allowed me to see before. When I have followed up my own evaluations over the years, I have enquired of intended users about actual use. What I would typically hear was something like: "Yes, the findings were helpful in this way and that, and here's what we did with them." If there had been recommendations, I would ask what subsequent actions, if any, followed. But, beyond the focus on findings and recommendations, what they almost inevitably added was something to the effect that "it wasn't really the findings that were so important in the end, it was going through the process." And I would reply: "That's nice. I'm glad you appreciated the process, but what did you really do with the findings?" In reflecting on these interactions, I came to realise that the entire field has narrowly defined use as use of findings. We have thus not had ways to conceptualise or talk about what happens to people and organisations as a result of being involved in an evaluation process: what I have come to call 'process use'.


The Impacts of Experiencing the Culture of Evaluation

I have defined process use as relating to and being indicated by individual changes in thinking and behaving that occur among those involved in evaluation as a result of the learning that occurs during the evaluation process. Changes in program or organizational procedures and culture may also be manifestations of process impacts.

One way of thinking about process use is to recognize that evaluation constitutes a culture, of sorts. We, as evaluators, have our own values, our own ways of thinking, our own language, our own hierarchy, and our own reward system. When we engage other people in the evaluation process, we are providing them with a cross-cultural experience. They often experience evaluators as imperialistic, that is, as imposing the evaluation culture on top of their own values and culture – or they may find the cross-cultural experience stimulating and friendly. But in either case, and all the spaces in-between, it is a cross-cultural interaction.

Those new to the evaluation culture may need help and facilitation in coming to view the experience as valuable. One of the ways I sometimes attempt to engage people in the value of evaluation is to suggest that they may reap personal and professional benefits from learning how to operate in an evaluation culture. Many funders are immersed in that culture. Knowing how to speak the language of evaluation and conceptualize programs logically are not inherent 'goods,' but can be instrumentally good in helping people get the things they want, not least of all, to attract resources for their programs. They may also develop skills in reality-testing that have application in other areas of professional and even personal life.

This culture of evaluation, which we as evaluators take for granted in our own way of thinking, is quite alien to many of the folks with whom we work at program levels. Examples of the values of evaluation include: clarity, specificity, and focusing; being systematic and making assumptions explicit; operationalizing program concepts, ideas, and goals; distinguishing inputs and processes from outcomes; valuing empirical evidence; and separating statements of fact from interpretations and judgments. These values constitute ways of thinking that are not natural to people and that are quite alien to many. When we take people through a process of evaluation – at least in any kind of stakeholder involvement or participatory process – they are in fact learning things about evaluation culture and often learning how to think in these ways. Recognizing this leads to the possibility of conceptualizing some different kinds of process uses, and that's what I want to turn to now.

Learning to Think Evaluatively

"Process use," as I've said, refers to using evaluation logic and processes to help people in programs and organizations learn to think evaluatively. This is distinct from using the substantive findings in an evaluation report. It's equivalent to the difference between learning how to learn versus learning substantive knowledge about something. Learning how to think evaluatively is learning how to learn. I think that facilitating learning about evaluation opens up new possibilities for positioning the field of evaluation professionally. It is a kind of process impact that organizations are coming to value because the capacity to engage in this kind of thinking has more enduring value than a delimited set of findings, especially for organizations interested in becoming what has come to be called popularly "learning organizations." Findings have a very short 'half-life' – to use a physical science metaphor. They deteriorate very quickly as the world changes rapidly. Specific findings typically have a small window of relevance. In contrast, learning to think and act evaluatively can have an ongoing impact. The experience of being involved in an evaluation, then, for those stakeholders actually involved, can have a lasting impact on how they think, on their openness to reality-testing, and on how they view the things they do. This is one kind of process use – learning how to think evaluatively.

Reality-Testing and Evaluation

My host and Guardian Angel while I've been in Brazil has been Thereza Penna Firme. She loves stories and metaphors for evaluation, so as a way of acknowledging her contributions and thanking her for her hospitality, I want to close with an evaluation story I know she likes, one from literature: the story of Don Quixote, the Man of La Mancha. In the story, Don Quixote, in his old age, loses touch with ordinary reality and conceives the project of becoming a Knight Errant, riding through the world making the world a better place. As the story unfolds, it becomes a story of different realities. Don Quixote thinks he is fighting a great army; others see it as a mere herd of sheep. Don Quixote thinks he is fighting a giant, while others see it as a windmill. Don Quixote thinks he is saving a fair damsel while others see her as a common prostitute. Near the end, tricked by his son-in-law and the village priest into looking at his own reflection in a mirror, Don Quixote enters back into the ordinary reality of those around him. But the disappointment demoralizes and exhausts him. Knowing that he is dying, those around him attempt to comfort him by assuring him that at least he will die in touch with reality. He responds with one of the great soliloquies of theater.

Reality. Life as it is. I've lived many years now and I've seen life as it is: pain, misery, cruelty beyond belief. I've heard the voices of God's noblest creatures moan from the filth in the streets. I've been a soldier and a slave. I've seen my comrades die in battle or fall more slowly under the lash in Africa. These were men who saw life as it is and they died despairing. No glory. No brave last words. Only the look of despair in their eyes, questioning "Why?" I do not believe they were asking why they were dying, but why they had ever lived.

Who knows where madness lies? Perhaps to be too practical is madness, to seek treasure where there is only trash, to surrender one's dreams, this may be madness. But maddest of all is to see life only as it is and not also as it should be and could be.

Evaluation challenges us to do reality-testing, but not as an end in itself. We examine reality so as not to deceive ourselves and to better direct our efforts at effectively creating life as it should be and could be for the intended beneficiaries of programs.


References

ARENDT, H. Between past and future: eight exercises in political thought. New York: Viking Press, 1968.

NEUFELDT, V. (Ed.). Webster's new world dictionary. New York; London: Pocket Star Books, 1995.

PATTON, M. Q. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks, Calif.: Sage Publications, 2001.

______. Utilization-focused evaluation: the new century text. 3rd ed. Thousand Oaks, Calif.: Sage Publications, 1997.

ANNEX
STANDARDS FOR EVALUATION

UTILITY
The Utility Standards are intended to ensure that an evaluation will serve the practical information needs of intended users.

FEASIBILITY
The Feasibility Standards are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal.

PROPRIETY
The Propriety Standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.

ACCURACY
The Accuracy Standards are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine the worth or merit of the program being evaluated.

For the full set of the detailed standards, see: http://www.wmich.edu/evalctr/jc/