Everything that concerns the practical planning, conduct, and use of evaluations. Here, then, decision-makers and users, (prospective) evaluators, as well as other actors and interested parties can potentially find information on how to plan, conduct, and use evaluations.
Subject: Evaluation budget as a % of program expenditure
Date: Sat, 27 Nov 2004 14:04:15 +1000
From: Sonia Whiteley
I realise this is a topic that comes up on an almost annual basis (September 2003 at last count), but I was wondering whether there were any new thoughts on the issue.
I'm more interested in large-scale programs - at least $5 million plus in Aussie dollars - where evaluation is built into the program from day 1 (i.e. as the program is being carefully crafted from policy).
What percentage of the program (not the organisational) budget should be allocated to evaluation?
What actually happens in the real world of program evaluation budgets?
Does this differ across departments/areas of responsibility? Or, more specifically, are health department budgets, for example, generally larger percentage-wise than those for education or the environment?
Any pointers to recent references or case studies would be greatly appreciated.
Many thanks
Sonia Whiteley
What requirements?
Who does evaluation?
Date: Wed, 21 Jul 2004 08:29:49 +0100
From: bill fear
Subject: Agendas, decisions and using evaluation
The debate about the role of the evaluator in relation to getting the evaluation used has had a long and perennial history, re-emerging, as ever, about once every five years. There are a couple, or more, important points that are consistent (IMESHO):
1) No evaluator has the right to assume that their findings will, or should, be used, as a number of people have just recently noted. This right is the preserve of auditors.
2) There are two ways to maximise the value of an evaluation: a) involve stakeholders from the off (Patton); b) link evaluation to budgets (Australia; the Netherlands).
3) Most interestingly, a piece of work by the NAO (probably by Chelimsky and published around 5-8 years ago; sorry, I have a real problem remembering references) showed that high-quality evaluations tend to be rejected initially. However, these same evaluations usually have an impact around five years - that's 5 years - later, usually at the conceptual level. Ergo, an acid test of a good evaluation that has been carried out independently of stakeholders may well be the degree of initial resistance and rejection. Indeed, it may be that an evaluation has more impact if the evaluator does not try to get it taken account of. Just think through what we know about decision making.
On that point, any good evaluator surely must, surely absolutely must, have an understanding of decision making from the individual level to the organisational level.
Helpful references are:
at an individual level
www.bps.org.uk, then click on 'publications', then 'the psychologist', then 'search the psychologist online', then 'volume 15 (2002)', then 'volume 15 part 2 (February 2002)', then look at articles 4, 5, 6, 7. Easy reading to a high standard (mostly).
and
Gilbert, D. and Wilson, T. 'Miswanting.' www.wjh.harvard.edu/~dtg/Gilbert%20&%20Wilson%20(Miswanting).pdf (or put 'miswanting' into Google)
At an organisational level it is still, for me, the stock-in-trade publication: 'Organizations: Structures, Processes and Outcomes' by Hall.
We might also want to consider that US Senators apparently spend just 7 minutes a day reading, on average, and that for a GP to keep up to date with current relevant medicine they would need to read for 17 hours a week (mostly non-fiction, or at least not knowingly fiction).
And then, of course, there are the values of the evaluator. Our values tend to drive our behaviour - although they don't have to. Not judging others on the basis of their values, which may conflict heavily with our own, is immensely difficult. So, we may assume that our evaluation should be taken account of according to our values, but the values of the person on the other side may be different. And somehow we have to find a way not to let that influence our behaviour, and to respect the values of the other/s. After all, there is no moral 'right' or 'wrong'; ethics are a consensus of agreed rules depicting right and wrong, not a universal absolute, and there is no known set of universal values.
At 1:36 PM -0400 13/10/04, Jill Ibell wrote:
>> Please let me know specific interview questions that you have found helpful in prior program evaluations. The use is for an internal program evaluation process, which has recently been started on a more formal basis than prior years' troubleshooting operations.
Here are my generic, use-anywhere, run-out-of-bright-ideas interview questions. They are based on Vygotskyian learning theory and action research practice, and they have never failed to get some interesting and valuable responses. I've tried to turn them into something that relates to what you are interested in, but you get the general drift:
As I get older I half begin to think that these may be the only questions you need to ask. In my experience, the responses are incredibly rich and insightful about people's judgement of worth, and it forces them to base their responses on observable or justifiable data.
Cheers
Bob
BOB WILLIAMS bobwill@actrix.co.nz Check out the free resources on my web site: http://users.actrix.co.nz/bobwill
Mobile (64) 21 254 8983
... there are always exceptions. Reality is too complex to be captured by theory. I'm reminded of the general semantics principle that "the map is not the territory": a theory is distinct from the reality it purports to represent. Bob Dick
-------- Original Message --------
Subject: Re: Looking for methodologies to identify/choose stakeholders
Date: Sun, 14 Nov 2004 12:38:50 -0800
From: Avichal Jha
Hi Jonny,
Michael Patton's "snowball" sampling technique comes to mind. You can find a discussion of different techniques in "Utilization-Focused Evaluation," published by Sage. I believe the 3rd is the most recent edition. Carol Weiss also has a great discussion on involving stakeholders in "Evaluation: Methods for Studying Programs and Policies."
What the discussion boils down to is context: What are you evaluating? The evaluand itself should suggest at least a limited group of stakeholders, i.e., those who asked for the evaluation. When we're evaluating policy, however, this may not hold; in that situation, the context becomes that of the policy. Once you have a single stakeholder in mind, ask that stakeholder who other stakeholders might be. This process, repeated with each new stakeholder, will "snowball" into a much larger sample.
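To make that iteration concrete, here is a minimal editorial sketch of the snowball loop in Python. It is an illustration, not part of the original post: the names snowball_stakeholders and ask_for_referrals are hypothetical, and in practice the referral function would be an actual interview rather than a lookup.

# Snowball stakeholder identification: start from one known stakeholder and
# repeatedly ask each newly named stakeholder to name others, until no new
# names come back or a round limit is reached.
def snowball_stakeholders(seed, ask_for_referrals, max_rounds=5):
    known = {seed}
    frontier = [seed]
    for _ in range(max_rounds):
        new_names = []
        for stakeholder in frontier:
            for name in ask_for_referrals(stakeholder):
                if name not in known:
                    known.add(name)
                    new_names.append(name)
        if not new_names:  # nobody named anyone new; the sample has converged
            break
        frontier = new_names
    return known

# Toy referral map standing in for real interviews (purely illustrative):
referral_map = {
    "program manager": ["funder", "frontline staff"],
    "funder": ["policy unit"],
    "frontline staff": ["clients"],
}
print(snowball_stakeholders("program manager",
                            lambda s: referral_map.get(s, [])))
# prints all five stakeholders (set order may vary)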
This is just one of the ways that Patton and others have discussed. I hope it helps (although my gut feeling is that this is more useful for program evaluation than policy analysis). As I suggested, if you haven't already looked at Patton and Weiss, I think you'll find their work very helpful.
Best of luck, Avi
Avichal Jha, M.A. Doctoral Student Evaluation and Applied Methods Claremont Graduate University avichal.jha@cgu.edu
-----Original Message-----
From: American Evaluation Association Discussion List
To: EVALTALK@BAMA.UA.EDU
Sent: 11/14/2004 10:20 AM
Subject: Looking for methodologies to identify/choose stakeholders
We all agree that it is important to involve stakeholders in various phases of the evaluation life cycle. But how do we identify the population of relevant stakeholders and choose among them? My sense is that we tend to use the "I will know them when I see them" method. (It's what I do.) But are there more deliberate and systematic ways to go about it? Has anyone tried to develop a methodology? If anyone has relevant references, please send them my way. Thanks.
Jonny
Jonathan A. Morell, Ph.D., Senior Policy Analyst
Street address: 3520 Green Court, Suite 300, Ann Arbor, Michigan 48105
Mail address: PO Box 134001, Ann Arbor, Michigan 48113-4001
Desk: 734 302-4668 Reception: 734 302-4600 Fax: 734 302-4991 Email: jonny.morell@altarum.org
"Once upon a time there was a word. And the word was evaluation. And the word was good. Teachers used the word in a particular way. Later on, other people used the word in a different way. After a while, nobody knew for sure what the word meant. But they all knew it was a good word. Evaluation was a thing to be cherished. But what kind of a good thing was it? More important, what kind of a good thing is it?" (Popham, 1993, p. 1)
"Evaluation - more than any science - is what people say it is; and people are saying it is many different things." (Glass & Ellet, 1980, p. 211)
"Research is aimed at truth. Evaluation is aimed at action." (wird M.Q.Patton zugeschrieben, Quelle mir unbekannt) Richtig muss es heißen: "Research aims to produce knowledge and truth. Useful evaluation supports action." (Patton, 1997, p. 24)
"Irgend etwas wird von irgend jemandem nach irgendwelchen Kriterien in irgendeiner Weise bewertet." (Kromrey, 2001, S. 21)
"[...] evaluation has two arms, only one of which is engaged in data-gathering. The other arm collects, clarifies, and verifies relevant values and standards." (Scriven, 1991, p. 5)
"The evaluation responsibility is a responsibility to make judgements." (Stake, 1979, p. 55)
"In God we trust. All others must bring data." (Robert Hayden, Plymouth State College, zit. n. http://www.keypress.com/fathom/jokes.html; Berk, 2007 zitiert abweichend W. Edwards Deming als Urheber)
"Der Umfang der Gefahren bei der konkreten Forschung wirkt sich auf viele Praktiker mit Sicherheit nicht gerade ermutigend aus. Scheint doch das Einzige mit Gewißheit Vorhersagbare zu sein, daß immer etwas falsch gemacht werden wird." (Wittmann, 1985, S. 187)
"The notion of the evaluator as a superman who will make all social choices easy and all programs efficient, turning public management into a technology, is a pipe dream." (Cronbach et al., 1980, p. 4)
"Once upon a time, the evaluation researcher needed only the 'Bible' ('Old Testament', Campbell and Stanley, 1963; 'New Testament', Cook and Campbell, 1979) to look up an appropriate research design and, hey presto, be out into the field." (Pawson & Tilley, 1997, p. 1)
"To make research work when it is coping with the complexities of real people in real programs run by real organizations takes skill – and some guts." (Weiss, 1972, p. 9)
"[...] what the professional independent evaluator brings to the party is a fresh eye and some technical skills." (Scriven, 1997, p. 499)
"One requires:
a good sense of humour;
and a thick skin.
Above all else, don't take yourself too seriously (and try not to be paranoid when having inappropriate discussions in a public space)." (Fear, Bill. Career in Evaluation - Opinions wanted. EVALTALK)
"The world of evaluation is a frighteningly real world. [...] The actors in the educational drama are strikingly human, with all the attendant frailties of real people." (Popham, 1993, p. 217)
"Evaluators who steel themselves against the probable perils of reality will be less shocked when they try out their shiny new evaluation skills." (Popham, 1993, p. 217)
"Recently, I opened an evaluation process with a staff workshop in which I invited participants to share perceptions of and metaphors for evaluation. The program director went to a nearby closet, took out a vacuum cleaner, turned it on, and pronounced: 'Evaluation sucks!'" (Patton, 1997, p. 267)
"Doing a good evaluation is not a stroll on the beach." (Weiss, 1997, p. 325)
"I find that I have to begin every evaluation exercise by finding out what people’s previous experiences have been with evaluation, and I find many of those experiences have been negative." (Patton, 2002, p. 131)
"The wolfdog of evaluation is acceptable as a method of controlling the peasants, but it must not be allowed into the castle – that is the message which each of these ideologies represents, in its own way." (Scriven, 2000, p. 252)
"Evaluation avanciert zum neuen Kampfbegriff in der Qualitätsdebatte" (Schratz, 1999, S. 64)
"The more evaluation, the less program development; the more demonstration projects, the less follow-through" ("Wilensky's Law", Wilensky, 1985, S. 9)
"In many educational systems everybody seems to hate external evaluation while nobody trusts internal evaluation." (Nevo, 2001, p. 104)
"We live in a knowledge-centred, value-adding, information-processing, management-fixated world which has an obsession with decision-making." (Pawson & Tilley, 1997, pp. xi-xii)
"[...] 'evaluation' has become a mantra of modernity." (Pawson & Tilley, 1997, p. 2)
"I've often referred to the difference between Evaluation and evaluation. Oddly enough evaluation is a much bigger endeavour. Everyone does it often with great rigour, sometimes with a rigour we don't comprehend or agree with. On the other hand Evaluation is our patch of earth and a small one in the grand scheme of things." (Williams, Bob . Re: A Sunday meditation (definitely about evaluation), EVALTALK
"In the end it's politics!" (Capela, Stan . Re: A Wednesday clarification (longish and occasionally peevish), EVALTALK
"Evaluation research 1963-1997
Must do better. Too easily distracted by silly ideas. Ought to have a clearer sense of priorities and to work more systematically to see them through. Will yet go on to do great things." (Pawson & Tilley, 1997, p. 28)
"Evaluation no longer has the luxury of a-empirical theoretical development." (Smith, 1993. p. 241)
"What is evaluated? Everything. One can begin at the beginning of a dictionary and go through to the end, and every noun, common or proper, calls to mind a context in which evaluation would be appropriate" (Scriven, 1980, p. 4)
"Social programs are complex undertakings. They are an amalgam of dreams and personalities, rooms and theories, paper clips and organizational structure, clients and activities, budgets and photocopies, and great intentions." (Weiss, 1998, p. 48)
"Unfortunately, except in a few areas, planning of social programs proceeds more by the seat of the pants and the example of 'what everybody else is doing,' than it does by thoughtful and critical review of evidence and experience." (Weiss, 2002, p. 204)
"The purpose of evaluation is not to prove, but to improve." (Egon Guba, zit. n. Stufflebeam, 2004)
"Evaluation's most important purpose is not to prove, but to improve." (Stufflebeam, 2004, p. 247)
"Ergebnisse einer Evaluation sind nicht Daten, sondern Entscheidungen über Konsequenzen für die weitere Arbeitsplanung." (Burkard & Eikenbusch, 2000, S. 29)
"We are impressed by the creativity in the field of evaluation, yet at the same time concerned because evaluators often forget or fail to emphasize the basic purpose of their work." (Glass & Ellet, 1980, p. 212)
"While I do think that people who invent terms have some obligation to argue against careless shifts from their original meanings, they also have an obligation to be open-minded about serious arguments for modification or clarification of the original definitions." (Scriven, 2004, p. 17, in JMDE No. 1)
"For a time it appeared that an educational evaluation model was being generated by anyone who (1) could spell educational evaluation and (2) had access to an appropriate number of boxes and arrows." (Popham, 1993, p. 23)
"One gets the impression that what passes for evaluative research is indeed a mixed bag at best and chaos at worst." (Suchman, 1967, p. vii)
"From the ambitions of the academic disciplines, from the convulsive reforms of the educational system, from the battle-ground of the War on Poverty, from the ashes of the Great Society, from the reprisals of an indignant taxpaying public, there has emerged evaluation." (Glass, 1976, p. 9)
"There was a general concern over the poor academic performance of our nation's youth. ... The quest for accountability had begun." (Baron & Baron, 1980, p. 85-86)
"Our search as lay historians reveals that the the first recorded instance of evaluation occurred when man, woman, and serpent were punished for having engaged in acts which apparently had not been among the objectives defined by the Program circumscribing their existence." (Perloff, Perloff & Sussna, 1976, p. 264)
"In the beginning, God created the heaven and the earth. And God saw everything that he made. "Behold," God said, "it is very good." And the evening and the morning were the sixth day. And on the seventh day God rested from all His work. His archangel came then unto Him asking, "God, how do you know that what you have created is 'very good'? What are your criteria? On what data do you base your judgment? Just exactly what results were you expecting to attain? And aren't you a little close to the situation to make a fair and unbiased evaluation?" God thought about these questions all that day and His rest was greatly disturbed. On the eighth day God said, "Lucifer, go to hell." Thus was evaluation born in a blaze of glory." Halcolm's The Real Story of Paradise Lost (Patton, 1997, p. 1)
"The difference [between quantitative and qualitative researchers] is that, while a quantitative reporter would say 'Only ten persons were present ...,' a truly qualitative reporter would say, 'Attendance at the session was depressing.'" (Sechrest & Figueredo, 1993, p. 655)
"We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute." (Smith & Pell, 2003, p. 1459)
"You mean you guys actually look at the evaluations? I taught two sections of the same class last semester, and I stopped reading the evaluations after about the sixth section I taught. Most are positive, some wish I would die, and none provide useful feedback." (tuxthepenguin auf http://chronicle.com/forums/index.php?topic=69226.0)
"There is nothing a Government hates more than to be well informed; for it makes the process of arriving at decisions much more complicated and difficult." (John Maynard Keynes, The Times, March 11, 1937, p. 18)
"The program theory approach has exposed the impoverished nature of the theories that underlie many of the interventions we study." (Bickman, 2000, p. 107)
"A program is a theory and an evaluation is its test." (Rein, 1981, S. 141)