
Dispelling Evaluation Myths


Many organizations do not take the time or devote the resources to conduct program evaluation, but we’ve found that their reasons for avoiding evaluation are often based on misconceptions. We’re determined to break down the myths. Come learn the truth about evaluation, and see why it may be easier than you think to start evaluating your programs today.

#1 Myth: Evaluation is scientific, and must be done only by experts.

Truth: Evaluation is systematic, and can be done by anyone. We define evaluation as “the systematic collection of information about a program that enables stakeholders to better understand the program, improve its effectiveness, and/or make decisions about future programming.” Do-it-yourself tools, like our Logic Model Builder at the Point K Learning Center, are available to walk you through the evaluation process.

#2 Myth: Evaluation plans require you to evaluate all programs.

Truth: In an ideal world you would have the time and resources to evaluate all programs. In the real world, we all have to pick and choose. When deciding which programs to evaluate, consider the following questions:
  • How important is the program to your organization? To the community you serve?
  • How many people does the program impact?
  • How much talent and money are at stake if the program is not evaluated?
  • Does the evaluation itself have the potential to provide valuable learning that you could use in future programs?
  • Is evaluation necessary to get buy-in from key stakeholders and prospective funders?

#3 Myth: Quantitative data is better than qualitative data.

Truth: While quantitative data is an important part of evaluation, some types of information are better communicated through qualitative data. We suggest you use both; they are complementary. Be sure to match the data collection method to your audience. Qualitative data from interviews, journaling, and focus groups can help you capture "the story," which is often the most powerful depiction of the benefits of your services.

#4 Myth: An evaluation plan must be perfect in order to be effective.

Truth: Don't worry about perfection. It's far more important to do something systematically and consistently than to wait until every last detail has been tested. Also, stay flexible in your design: allow for change or expansion midstream if program objectives shift or evaluation data points to a new direction for inquiry.

#5 Myth: Evaluating only success stories ensures positive evaluations.

Truth: An effective evaluation assesses both the positive and negative aspects of the program you are evaluating. You can learn a great deal about a program by understanding its failures, dropouts, and missed opportunities, and that understanding will illuminate and improve future programming. That doesn't mean you should focus only on the negatives. Determine what tone will resonate with your audience and focus on the victories, the challenges, or both.

#6 Myth: Raw data is useless after an evaluation report is written.

Truth: Raw data is the foundation of your evaluation. Hold onto it! It may become useful again as the program continues to develop, providing valuable information and saving time and resources when new questions arise.





