Excerpts from A Guide to Evaluating Asset-Based Community Development: Lessons, Challenges and Opportunities by Thomas Dewar.
The following ten principles are offered as elements to consider in evaluating asset-based community development. They are not meant to be a model, or even a complete checklist. Rather, each principle reflects lessons learned in community building, and is at least worth considering in the process of designing and carrying out evaluations appropriate to asset-based community development.
1. Involve participants directly in the process.
Perhaps the most basic challenge in making evaluation useful is to center it around those actually doing the work. This requires considerable time and energy, right from the outset, but it is well worth it.
As a first step, it is very important to acknowledge how much the people doing this work already know. It is not as if the work of evaluating these efforts starts from scratch. Those directly involved typically have a very clear sense about progress and impact, and have often developed and improved practices based on their sense of what is working and what isn’t. Good evaluation starts with this kind of experience and looks for ways to record it that are credible, clear and persuasive to others. In fact, the people doing this work are the real experts, and ought to be the first point of reference. This may surprise many community actors, who have experienced evaluation as something that does not involve them directly, something that happens to them rather than with them. They need to know that their careful observation, good listening, and common sense are highly valued. Their knowledge is the primary “asset” in evaluation.
The value of co-discovery
At its best, evaluation cannot be done solely from the outside in. It is a mutual process in which those directly involved in the activity discover and learn for themselves, find ways to share this with others, and learn new things from outsiders, or fresh listeners. Building some internal practice around evaluation, which might record certain reflections or note key issues as they are discussed, is the starting point. But clearly the outside view is also critical.
Co-discovery applies to planning as well as carrying out the actual evaluation. There is no one way to do this but it is critically important that some appropriate ways be found in all cases at the beginning of the process. Questions for everyone to ask in the beginning include:
- What is the purpose of the evaluation?
- Who are the most important audiences?
- What would we like to learn?
- What questions will guide us?
Once the initial design has emerged, it is important to remember that the conduct of the evaluation should strengthen or complement rather than distract from the work. Often this involves thinking through ways to rely on community people to gather information as they make their daily rounds, rather than on “evaluators” or third parties. Building trust between participants and evaluators is key, and working together both to design and carry out the evaluation helps to build that trust.
2. Know your audience.
Like every form of communication, each evaluation has an audience. Ideally, this audience can be identified and understood. The best evaluations are purposeful about this, and focused on known audiences. They are done with someone in mind. Who is that someone, and what does that person want to know? This focuses the work.
The primary audience may be those actually doing the work, the participants. In other cases, the primary audience, the one that participants tend to worry about, is the interested nonparticipants, such as funders or other practitioners. Clearly, these two kinds of audiences interact – inside and outside – and the boundaries between them are not always clear. Most useful evaluations are done for both, and often operate on the principle that if those directly participating are truly informed, so too will be others who are less directly involved.
Specifically, external audiences vary from situation to situation, but typically involve several different types. Here are four of the most common types of external audiences.
- People who might be considered friends, such as key allies who share an interest in the participants’ good reputation, current or former colleagues who know and like the groups and who may tend to protect rather than inform them, and others who simply like the groups or what they are doing so much that they can’t see past this to what is missing or what may not be working. Often, friends are too uncritical.
- Some who are supporters, but still want to see evidence about progress and who may be under increasing pressure to justify their support to others.
- Some who are skeptics, such as people (sometimes board members) who advocated for another project or approach than the one the group has chosen, people who have seen too many projects that are “long on rhetoric and short on results”.
- Some who are opponents, such as those who feel resources devoted to this group’s efforts detract from their own; or people who dislike or fear groups or people like those involved here, and who worry that increased democratic control may spread to threaten their power and status; or those who have decided that another way is more effective or correct. In many cases, these opponents may be immovable.
Each of these audiences is looking for something different.
3. Focus on appropriate goals and document intermediate outcomes.
Don’t give up on outcomes. Name some that do apply to the group’s goals, and track them as carefully as possible. Since so many community actors have had their feet held to the fire around outcomes they did not choose and do not accept as fair, there is an understandable tendency to become resistant to the idea of outcomes in general. Rather than giving up on the possibility of finding some appropriate outcome measure for the work, however, it is very important to find some that really fit the goals, and to track them. This will often include some short-term or intermediate outcomes.
Paying attention to outcomes is one important way in which scientific and appropriate evaluation meet. If the search for good outcomes is abandoned altogether, it is very difficult to recover credibility with neutral or skeptical audiences. Further, appropriate outcomes are important internally as well, among active participants. Tracking them can be a source of pride and can foster a greater awareness of how real progress is being achieved.
4. Document some results as quickly as possible.
This boosts morale, gives people a sense of movement, and helps develop the practice of recording important information. It also begins to name the kinds of outcomes that are realistic and appropriate for the particular group or community in question. This is important because of how charged the discussion of outcomes has become. There is often a tension in community-building work between those outcomes promised or expected by outside observers, and those actually sought and achieved through community-building practice. This tension is particularly important to recognize and address in the early stages of this work.
5. Develop some strong baseline evidence.
Closely related to the value of getting some results down quickly is the usefulness of having baseline or starting point information against which to gauge progress. Indeed, one of the most common gaps in telling the story of community development is the lack of documentation over time. Unfortunately, this is sometimes referred to as “before” and “after,” static categories that do not adequately describe this kind of work. Reasonable people will want to know, however, how things are going based on more than one point in time, and the more systematic one can be about this, the more powerful the story can be. Not everything can be known early on, but of the things that are known, which ones warrant continuing attention? Or, put another way, which provide the basis for documenting progress?
For community builders, it is important to track the assets as they begin to scale up, as they are connected with each other and used. Much of the power of this work is in how it unfolds: one thing leads to another.
Typically, key stakeholders such as funders and institutional representatives are especially interested in baseline information. Basic demographics, combined with a “map” of a community’s resources and challenges, provide outsiders with an introduction to the project. Having an initial snapshot helps establish this work as credible, even though the picture may portray a view of the community that is new to the observers.
6. Be descriptive.
Many times the most valuable contribution of an evaluation is simply to describe what is happening in actual practice. Evaluators may imagine that the goal is something much fancier, something called “analysis,” and so they seek to reach conclusions about whether the project or group actually seems to work, and if so, how and for whom. But in fact, it is much more common to hear that an evaluation has been well received because it simply describes the work and its variety, themes, and dilemmas. Once described, the work is much better understood both for what it is and for what it is not.
This is why well-told stories are so powerful. Instead of taking things apart and putting them under the microscope, both evaluators and practitioners have learned that good stories put the parts back together and convey meaning in a holistic way. Stories provide concrete examples that “make the work real” and “bring it home” for people.
Interestingly, the value of being descriptive is particularly important to grassroots groups, many of whom operate under the assumption that what they are actually doing cannot possibly be of interest to others. This call for good description represents a way of respecting what community actors are actually doing, and it often reveals that people who may believe they already know what is involved might not, in fact, understand this particular situation.
7. Be graphic.
Pictures, charts, bulletin boards, photo displays, and the like, can be a great jumping-off point for getting more deeply into discussion of what community building involves.
These visual representations don’t have to remain static; they can become dynamic. Often, in addition to being a good summary of the work, visual representations of the kinds of assets being identified can also show how assets begin to scale up or become more connected, and in the process be put to greater use. Over time, these graphics can show progress by how the maps or pictures fill in, and how the assets listed become more connected to one another, or more activated. As with the imperative to be descriptive, the call to be graphic can also be an important way to report to the community, and to participants, in ways that are both simple and engaging.
Another advantage of visual or graphic presentation is that it invites different participants. People not typically thought of as reporters or messengers about community building do become reporters when the means of communication is something other than speaking or writing. Young and old people, immigrants with limited English, shy people, and artistic people often move to center stage when the means being used are more graphic.
8. Make sure the evaluation is telling people something they didn’t already know.
People find evaluations useful when they provide new information or when they provide evidence about something they thought was true but could not really substantiate. This sounds absurdly obvious but it is surprising how often evaluations are either a form of marketing or of predictable criticism. If evaluations simply rehash what is already well-known, or package it so that it looks like something new, the process begins to seem like “going through the motions.” Even if a funder or outside party has required them, evaluations can be useful. This is especially true if they are organized around what those doing the work would like to keep track of, learn about, or improve.
9. Be open to shortcomings.
This commitment builds credibility and adds tremendously to the usefulness of the evaluation. It is never the case that shortcomings mean that nothing can be done, or that nothing worked. More likely, obstacles noted indicate that the work was difficult (sometimes surprisingly so) because of specific barriers and dilemmas that can be named, better understood, and in some cases, dealt with over time.
The successful experience of strong projects is often rooted in a full and open discussion of difficulties along the way.
Being open about shortcomings means that residents or project participants create and maintain forums where it is safe to worry about what is not working, as well as to get a better sense of what is going well. In these forums participants can also brainstorm new or different approaches. There will always be problems and setbacks. No one expects otherwise. The challenge is to find ways to proceed despite them.
10. Share and discuss findings as the project progresses.
For information to be useful it must be shared; and for it to be put into practice, it must be absorbed while the work is still underway. For these purposes, final reports are too late. The time for sharing lessons and information is often much earlier than originally planned. Circulating drafts and inviting informal discussion of preliminary findings often serve as the basis for clarifying issues, for better understanding the meaning of what is being learned, and for informing participants. Furthermore, if participants are given the opportunity to learn along the way, they will cooperate and even invest in the evaluation process in order to make it as current and credible as possible. For all of these reasons, effective community evaluators try to share what is being learned before it reaches the report stage, so that participants are given an opportunity to discuss, digest, and respond to new information in a way that makes sense. By doing so, they also increase the quality of the analysis and reflections.
Community-sensitive evaluators often prepare final reports for outside audiences, while midterm and interim reports are for internal ones. Furthermore, discussions along the way about what everyone is learning reinforce the sense in which participants contribute to and benefit directly from the process. Waiting for results until the end simply doesn’t fit with what is being suggested here as a more appropriate, learning-oriented evaluation.