Driven by Results: An Interview with AECF’s Tom Kelly


Adopting RBA provided us with a way of being clearer to ourselves, our trustees, and grantees about what we’re trying to achieve and how we’ll achieve it together. ~ Tom Kelly


The Annie E. Casey Foundation (“AECF”) is on the cutting edge of advocacy evaluation. In 2004, the Foundation adopted and adapted Mark Friedman’s Results-Based Accountability (“RBA”) model to measure the difference it makes in the lives of America’s children and families. AECF is embedding this system both in its own operations and in its work with grantees and partners. The Foundation’s commitment to RBA is expressed in its Five Year Benchmark: “AECF will be seen as the most continuously data-driven, evidence-based and results-oriented of all U.S. philanthropies.” The Foundation has included advocacy evaluation as part of its results accountability model. Tom Kelly, Evaluation Manager in AECF’s Measurement, Evaluation, Communications & Advocacy department, recently discussed AECF’s advocacy evaluation with Innovation Network’s Johanna Gladfelter.

InnoNet: Tom, thanks for taking the time to talk about your experiences and insights into advocacy evaluation. Could you begin by explaining your role at AECF?

Tom: I am the Evaluation Manager within the Measurement, Evaluation, Communications & Advocacy (MECA) department at Casey. As a unit we provide consultation and technical assistance on data and evaluation, internally to program officers and externally to grantees across the Foundation. We also lead and manage formal evaluation contracts and activities with our program officer colleagues. MECA is a cross-foundation support unit. Its main objectives are to support the organization’s policy advocacy efforts by providing data from our own evaluations or from other programs and research, and to conduct strategic communications, such as with the KIDS COUNT data. The functions within MECA are all in service of the Foundation’s core outcome: advancing systems, policies, and practices that support vulnerable children. Evaluation is seen as a function of our policy advocacy ability: we are better able to advocate for policies when we draw on data, evaluation, and experience. Evaluation is in service to that broader goal.

InnoNet: In 2004, AECF adopted Mark Friedman’s RBA model. How is this model used to evaluate the work of the Foundation?

Tom: Doug Nelson [AECF President] and the Foundation leadership needed to respond to our trustees who had asked us to be more explicit about the effects of our grantmaking in terms of long-term outcomes and the impact on children and families. What was attractive about Mark Friedman’s initial model was the question, “What difference does it make?” That is a question that our trustees ask us in a very concrete way. “How many more families were affected?” “How many more children served?” “Did this program actually result in a change in indicators that we would see at the population level?” The model focused our attention not only on the types of questions we were asking ourselves but also on the types of questions our trustees were asking us. Also, there were many people here at Casey who had worked with Mark [Friedman] in the past. Donna Stark (Director of Leadership Development at AECF) was in the Maryland state government with Mark and they used an early frame of his RBA model. So I believe we chose this model because it was asking us to think about the types of questions we already knew we wanted to answer, and a number of people on staff were already familiar with the model itself.

We also needed to bridge the multiple mechanisms of collecting and communicating data across the Foundation. Whether it is a logic model, a theory of change, or outcome-based grantmaking, there is evaluation data wrapped around all of these things. We knew that a common language platform would make it easier for our trustees to hear one cohesive message across the Foundation, and that the lack of one kept us from presenting a complete and integrated picture of the outcomes and impacts of the Foundation’s investments over time, in a way that made sense and that tied our work to long-term outcomes for children. It is more difficult for people to recognize, or to take credit for, successes that sit much further down a chain of events than it is for an impact on children and families delivered through direct services, and advocacy is one of the areas whose successes are furthest removed.

Doug and Foundation leadership understood that this was a huge cultural shift, not just a programmatic shift. It wasn’t just mandating that everyone was going to count a specific way; it was thinking about our internal processes. Casey programs and initiatives that had been formally evaluated had gone through the process of defining themselves, their theory of change, and their outcomes, but that represented only a small portion of the overall Casey portfolio. Adopting RBA did not replace our formal evaluation processes. It provided a framework around data and evaluation for those other areas of work that might be either too nascent to be formally evaluated or, in the case of advocacy, an area of work we had never really put to the test by asking what impact or difference the investment made. Adopting RBA was about changing expectations and perceptions and, even harder, behaviors here at Casey about how such a framework can be used within portfolios, across portfolios, and across the Foundation as a whole. Adopting RBA provided us with a way of being clearer to ourselves, our trustees, and grantees about what we’re trying to achieve and how we’ll achieve it together.

InnoNet: More specifically, how is the RBA model used to evaluate the Foundation’s advocacy work?

Tom: KIDS COUNT grantees represent a large portion of our state-based advocacy investment. We have the longest history and relationship with many of these grantees. There is certainly unevenness in terms of capacity: they are not all the same size, and they are not all structured in ways that fully support policy advocacy. The purpose of Casey’s early investment in KIDS COUNT was to make sure there was data to support the policy conversation. Our new interest became seeing how advocacy organizations, such as the KIDS COUNT grantees, could better describe the progress they are making within a multi-year frame using RBA. Using the Casey framework, we began providing trainings to grantees whose work was easier to document and demonstrate, which tended to be direct service provision. After training direct service grantees, we then began to train grantees working on system or practice change. I think advocacy grantees were on the back burner because we were trying to get our feet wet in terms of how best to apply the model to outcomes that are more distally related to the work. In doing this, we could also be more specific about what it would take for a KIDS COUNT grantee to be able to report better on its results. It’s harder for people to connect themselves, at an individual grant level, to a population-level change that involves many other people working on the same issue. The model plays a critical role by getting policy advocacy grantees to be more explicit about the results they can achieve, and are achieving, within a given timeframe. The exercise of going through the RBA process clarifies to both the program officer and the grantee the shared expectation about the investment’s effect and the expected results.

Our advocacy grantees came late to the training because we wanted to be sure we had appropriate examples. We wanted them to be more involved in testing models and trying different approaches and languages. In addition, we have other grantees that are sometimes advocacy grantees and at other times service providers, whom we wanted to involve more in advocacy. The model’s frame of “Impact, Influence, and Leverage” went a long way in helping even the service provider ask, “What influence am I having on the systems around me?” “What influence am I having on the policies for any of the services I deliver?”

When the program officer and the grantee discuss the model, there is at least an initial conversation about expectations and what the grantee should be trying to achieve. From the feedback we’ve received, it was in some ways very liberating for some grantees to know that they’re not on the hook for direct impact results, but they are on the hook for something else, and ultimately they need to be clear about what that is. However, what we should ask ourselves, what our trustees would ask us, and what the public should ask us is, “What difference did this make in the lives of children?” To me this is the connection between what we know we accomplished, how well we accomplished it, and what impact it had.

InnoNet: You’ve been a part of many of the field-building conversations related to advocacy evaluation. Why are AECF and other funders interested in advocacy evaluation?

Tom: There are many evaluation tools and methodologies that could apply to advocacy evaluation, but people haven’t had shared frameworks, vocabulary, or long-term expectations, and there is no single replicable model because contexts and timelines shift. AECF and other funders (including The California Endowment and The Atlantic Philanthropies) are committed to the belief that evaluation is hugely important for anyone in terms of self-management, progress, and achieving results. As evaluators within foundations, we saw that the readily accessible evaluation tools focus on the more easily measured data, but we know that advocacy involves a lot of things that are difficult to measure. Even something like public will, which on the surface may seem easy to measure, is expensive to assess through surveys. So we wanted to flesh out the options, strategies, and decision-making involved in deciding what to evaluate in advocacy work and how.

We also recognize that many local advocacy organizations are small and lightly staffed, and don’t have the resources, time, or expertise to bring to bear. We have advocates in very small organizations working within their states, and they need some of these skills. There wasn’t a model for sharing that level of technical assistance for advocacy evaluation, similar to United Way’s outcome measurement resources for direct service providers. We needed a different frame of language for a new audience that doesn’t focus on service outcomes. This was a newer approach.

There was an additional dynamic we were hearing from smaller advocacy organizations: advocacy strategies and learnings were top-of-mind for nonprofit directors but had not been made explicit for the next generation of leaders. Using the Composite Logic Model or other tools, we wanted to help an advocacy organization define a better strategy, one that could be measured, carried on, replicated, and supported by other advocates.

If grantmakers don’t have a clear grantmaking theory of change about policy advocacy success, nothing advocates do to measure themselves will help the funder understand. We need to be clear about what foundations need to know in order to have more informed partnership conversations with their advocacy grantees. The relationship between a funder and an advocacy grantee isn’t a traditional “we’re buying outcomes” relationship; it involves partnership, communication, and shared expectations. We looked at this work not as coming up with something new (i.e., wildly different methodologies), but as developing a whole different frame that was needed to introduce more effective evaluation technologies into advocacy work.

InnoNet: What challenges do foundations that engage in advocacy face? Does advocacy evaluation help them overcome any of those challenges?

Tom: I think a potential challenge is that we’ll become too simplistic about activities and outputs, and, knowing that outcomes are distal, not push hard enough to ask people about the linkages between their strategies and theory of change.

I also think that time is a challenge. Advocacy grantees, like everyone else, don’t have a lot of time. There is going to be a tendency to simplify when, in fact, the context advocacy grantees are operating in is so complex that it requires more attention about how success was or was not achieved.

The plus side is that I have seen examples from our advocacy grantees showing that when they are more explicit about intermediate outcomes and more concrete about their contribution to the overall result, they’re much better fundraisers for advocacy. By linking short-term to long-term outcomes, or in RBA terms by linking the programmatic level to the population level, we are able to be much more explicit about success and the results achieved along a much longer timeline for change than is normally apparent in direct service provision.

InnoNet: Thanks again for your time, Tom.

For more information on AECF’s RBA model, visit http://www.aecf.org/Home/OurApproach/DataAndEvaluation.aspx.


