
Advocacy Evaluation's Home at AEA:
An Interview with Advocacy and Policy Change TIG Leaders

Last year, for the first time, those interested in advocacy evaluation formally had a home at the American Evaluation Association's ("AEA") annual conference.  The Advocacy and Policy Change Topical Interest Group (or "TIG") was formed in 2007.  AEA TIGs are organized around special topics of interest to subgroups of AEA members.  TIGs coordinate their efforts by reviewing conference session proposals in their area of interest and developing a "track" of topically related sessions for the annual AEA conference.

The leaders of the Advocacy and Policy Change TIG are Julia Coffman, Justin Louie, Ehren Reed, and Co-chair Astrid Hendricks.

Advocacy Evaluation Update’s Sue Hoechstetter interviewed Julia, Justin, and Ehren for this issue.  Look for an interview with Co-chair Astrid Hendricks in an upcoming issue.

"Fundamentally, the approach that all of us use, and that we advocate for, is not just about measurement. It's about using information to inform strategy."

~ Julia Coffman


> Update: How did having a policy and evaluation TIG for the first time—and thirteen advocacy-related sessions—affect the 2007 AEA annual conference?

Ehren:  Two of the very positive things that struck me about the 2007 workshops were the depth and variety of information presented, and the number of people presenting on advocacy evaluation.  The 2006 conference in Portland was really the beginning of a presence for this kind of work at AEA.  There were five or six sessions, and a small number of people did most of the [advocacy-related] presenting there.  We had about five times as many people involved at the 2007 meeting—more people and more ideas.

Julia: I was really encouraged this past year by the number of people in attendance and by the level of interest in the breadth of workshops we offered.  We know that the TIG workshops filled a gap.  One woman there told me that she was an evaluator for the Humane Society’s advocacy work and, last year, for the first time, she felt included at the AEA conference.

> Update: What plans do you have for the Advocacy and Policy Change TIG in 2008, and will that include anything outside of the AEA conference?

Julia: We want to respond to some of the things we heard from TIG members last year about what they want, like connecting with other topical interest groups.  Members wanted to reach out to other TIGs to co-sponsor some workshops.  We also want to invite people from other groups to come to our sessions in order to enrich the conversation.  Much of what we do at the next conference will depend, though, upon the proposals for workshops that we receive.

Justin:  It does really depend on the workshops that get proposed, but my thoughts are on looking to the next level of work as more people have come to understand how to evaluate advocacy.  My knowledge, for example, has changed each year based upon what I hear talked about at AEA.  I would add that one of the structural challenges at AEA is that it is difficult for different TIGs to collaborate, so it would be helpful if AEA leadership would think more about facilitating connections between the groups.

Julia:  We do need more information about how to do this work, such as how to use real-time methods.  Each of us is going to take what we hear at the conference and incorporate that into our work, perhaps for the next couple of years. 

Ehren: We are not planning to do anything outside of AEA, because the TIG is conference-centric.  The priority is sharing our learning by bringing information about this topic to a broader audience than it would otherwise reach.  Another thing we can do back in our organizations is model information sharing and partnering, to contribute to learning in the field.  Innovation Network is always trying to do this by putting information up on our website's resource center, and we try to work with others instead of competing with them.

> Update:  What are your thoughts about requests for evaluations to show the actual impact of policy changes brought about by advocacy work? 

Justin: That exact question came up a number of times at AEA last year.  People in the TIG have traditionally done evaluation of policy advocacy, and the question of impact is about the effects once a policy gets approved and implemented.  Intensive resources are required for that work, resources far beyond what most foundations are willing to support.  That's why big research institutes and government agencies like the Government Accountability Office usually do that work.  At the same time, I hear people in the TIG and I hear funders say they need to move towards this kind of learning.  Personally, I try to walk them back, for reasons of resources.  For example, I've been working on evaluating the impact of community organizing on school reform.  To really understand the impact of organizing on student achievement requires a level of effort beyond what funding usually supports.  In some ways we rely on existing research showing that when you get to a certain point with the schools, improved learning outcomes are likely to follow.  At the same time, I think foundations are going to ask for more.  We have to look at what that means to us as a TIG and for us as a field.

Julia: I agree with Justin and would add that it's fundamentally about evaluating what foundations are supporting.  If they are supporting policy work, that's what we evaluate.  If the foundation is funding monitoring and implementing policies, then that's what we evaluate.  It doesn't make sense to leap to measuring beyond what is funded.  At the same time, we do need to know more about evaluating efforts focused on implementing what comes after policy change.

> Update:  What do you think should be done next in the advocacy evaluation field? For example, do you have thoughts on how work being done by third-party evaluators can help organizations that cannot afford that kind of assistance?

Ehren:  I'm glad you mentioned those that can't afford third-party evaluators, because that's key.  Generally we need to expand the depth and breadth of evaluation of advocacy work.  I think of four things that we should do:

  • Motivate all organizations to evaluate their advocacy work;
  • Promote the evaluation efforts of advocacy organizations;
  • Provide more information about data collection and how you do it; and
  • Provide more information about using the data once it is collected.

In addition, the conversations we've had to date have largely been around legislative policy and the legislative planning stage.  There is a good opportunity now to expand the conversation to policy implementation and to judicial and grassroots work.

Finally, our lessons have often come from key opportunities where we work with one organization or coalition and apply what we learn to others.  These case studies are labor-intensive and cost-prohibitive, and do not necessarily apply to all organizations.  We need to translate advocacy evaluation learning so that it can be used by all nonprofit organizations by:

  • Creating toolkits to help small and medium-size nonprofits;
  • Developing tools for evaluators who are beginning, or have not yet begun, the conversation, so they can build their capacity to work with organizations;
  • Learning more from what has already been accomplished on advocacy evaluation in the international field; and
  • Learning from government what advocacy works and what doesn't, and applying that knowledge to advocacy evaluation.

"I'm glad you mentioned organizations that can't afford third-party evaluators. We need to translate advocacy evaluation learning so it can be used by all nonprofit organizations."

~ Ehren Reed

Justin: I agree with Ehren that applying information to those organizations that can't afford third-party evaluation should be a priority.  I have a sense that is happening, as I know it is a priority for Innovation Network and for Alliance for Justice.  In my work, we train nonprofits on how to do their own evaluations.  My goal is to translate the tools and systems to a broader field so that more people can use them.  At AEA I want to present strategies that are not just for evaluators, but that can also be used by individual nonprofits to do their own evaluations.  However, the biggest challenge for nonprofits now is not how to implement these tools, but how to implement changes in the culture within their organizations.  And, without an external force, that can be difficult.

Julia: I agree with that.  Fundamentally the approach that all of us use, and that we advocate for, is not just about measurement.  It's about using information to inform strategy.  Simply putting out tools for organizations is not necessarily going to get us to that point right away.  It made sense for us to start where we started, in a lot of cases with organizations that have quite a bit of capacity to measure and learn, and where there were resources for evaluators to come in and help figure things out.  We tested and are testing ideas and figuring out what works, and then taking the ideas we develop to find out how they apply to different advocacy organizations.  There is a progression in the field and we are getting where we want to go.

> Update: Thanks so much for taking the time to talk with Advocacy Evaluation Update!

Please send any questions about this interview to advocacy [at] innonet [dot] org.

"The biggest challenge for nonprofits now is not how to implement evaluation tools, but how to implement changes in the culture within their organizations."

~ Justin Louie
