The Challenges of Evaluating Humanitarian Programs

Humanitarian crises – earthquakes, the resurgence of armed groups, drought, state collapse, and the like – can cause extreme human suffering and the loss of innumerable lives. They demand a rapid response from private, public, and non-profit organizations to save lives and alleviate intense suffering. Typically, the response is planned quickly, heavily funded, and implemented in complex physical and social environments. As evaluators, it is our responsibility to enter these complex situations and improvised projects and to use results-based methods to capture impact, both positive and negative. It takes an experienced evaluator to go beyond the realm of detailed logic models, systematic monitoring, and stable environments to truly understand and demonstrate the results of these programs.

In response to the complex nature of evaluating humanitarian interventions, the UK-based non-profit Active Learning Network for Accountability and Performance (ALNAP) published a guide to best practices for evaluating complex humanitarian interventions. In this guide, ALNAP outlines nine main challenges of performing evaluations in these volatile contexts. From these nine, we have identified the three greatest challenges we have encountered in our work, along with the innovative methods we apply to mitigate them.

1. Urgency and Chaos of Humanitarian Interventions


Because humanitarian interventions take place in crisis environments, they are usually planned quickly and implemented in rapidly changing contexts. As a result, planned activities, monitoring tools, and operations often differ considerably in practice from what was recorded in the project proposal – if there was a proposal at all. In our experience, partners must be willing to work in this flexible environment and provide as much contextual information as possible. Emerging evaluation methods such as Outcome Harvesting and real-time evaluations (RTEs) can be more effective in these situations: they capture what actually happened (through interviews and group discussions) and work backwards to determine which activities contributed to each outcome. We also de-emphasize consistency in monitoring and in the indicators determined at the beginning of the intervention, placing greater focus on beneficiaries' feedback and on the positive and negative impacts of the intervention.

2. Access to Participants and Data Collection Sites 


Enumerator crossing a local bridge during a data collection assignment in the Democratic Republic of the Congo.

The second greatest challenge we have encountered during data collection is gaining access to participants for key informant interviews and focus group discussions. However, we have discovered that one of the greatest advantages of the rapid digitization of the globe is that we can effectively conduct sampling, training, and interviews remotely. During a previous assignment in a highly volatile conflict zone in Masisi, a town in North Kivu Province, DRC, we used a snowball sampling strategy to gather primary data from project participants. In snowball sampling, the interviewee is asked at the end of a phone interview for the contact information of another participant or key informant connected to the project. Just as a snowball rolling down a hill accumulates volume as it rolls, this approach has allowed us to reach a growing number of program participants in highly insecure zones.
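As an illustration, the referral chain described above can be modeled as a traversal of a contact graph: each interview may yield new contacts, which are queued for future interviews until a target sample size is reached. The names and referral links in this sketch are entirely hypothetical.

```python
# Hypothetical contact graph: each interviewee may refer us to other
# participants they know (all names are illustrative placeholders).
referrals = {
    "seed_1": ["p2", "p3"],
    "p2": ["p4"],
    "p3": ["p4", "p5"],
    "p4": [],
    "p5": ["p6"],
    "p6": [],
}

def snowball_sample(referrals, seeds, max_interviews):
    """Interview seeds first; each interview may yield new referrals."""
    interviewed = []
    queue = list(seeds)
    while queue and len(interviewed) < max_interviews:
        person = queue.pop(0)
        if person in interviewed:  # skip anyone already interviewed
            continue
        interviewed.append(person)  # conduct the phone interview
        # At the end of the call, ask for contacts of other participants.
        queue.extend(referrals.get(person, []))
    return interviewed

sample = snowball_sample(referrals, seeds=["seed_1"], max_interviews=5)
```

In practice the "graph" is built interview by interview rather than known in advance, and the cap on interviews reflects time and budget constraints; the logic of working outward from a few trusted seed contacts is the same.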
Additionally, remote locations and damaged infrastructure – such as washed-out roads, damaged bridges, and lack of electricity – can pose a challenge when conducting evaluations. In our experience, these challenges can be mitigated by planning logistics in partnership with locals, allowing enough time for transportation and delays (e.g., roadblocks or re-routing), and bringing proper equipment. In fact, OCI prioritizes assignments in locations where we have a pre-qualified team of supervisors, consultants, and enumerators whom we can rely upon for their knowledge of the local context, adaptability, and real-time updates. We have also found that bringing a portable router prevents delays caused by a lack of network or Wi-Fi coverage, and that carrying a charging source keeps audio recording devices fully charged.

3. Tense Environment and Ethical Considerations

Humanitarian interventions bring a dimension of tension and unease that must be managed to collect high-quality, ethical, and timely data. In some cases, evaluators will form focus groups or initiate key informant interviews where people are unwilling to share information, thoughts, or feelings. This can be further exacerbated by ethnic, political, gender, and other group tensions.

Key Informant Interview

In our experience and research, we have identified a few methods to increase trust and minimize the impact of tensions during the interview process. The first is to be intentional with the sampling strategy for focus group discussions. Deliberately disaggregating by gender, ethnicity, tribal membership, or disability can either stimulate discussion or further alienate minorities, so the evaluator must be familiar with the country's context and inter-group tensions. For example, while it may be beneficial to separate men and women for a discussion on the impacts of gender or livelihoods programming, it may not be beneficial to separate ethnic groups in discussions on community peacebuilding. Secondly, it is important to ensure a gender-sensitive match between participants and the interviewer, such as a woman facilitating dialogues with female participants.
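The two steps above can be sketched as a simple grouping routine: participants are disaggregated by a chosen attribute, and each resulting group is paired with a matched facilitator where one is available. The roster, attribute values, and facilitator names below are hypothetical.

```python
from collections import defaultdict

# Hypothetical participant roster; fields are illustrative only.
participants = [
    {"name": "A", "gender": "female"},
    {"name": "B", "gender": "male"},
    {"name": "C", "gender": "female"},
    {"name": "D", "gender": "male"},
]

# Facilitators matched to each group (e.g., a woman facilitating
# dialogues with female participants).
facilitators = {"female": "Facilitator F", "male": "Facilitator M"}

def form_focus_groups(participants, disaggregate_by):
    """Group participants by the chosen attribute (e.g., gender)."""
    groups = defaultdict(list)
    for p in participants:
        groups[p[disaggregate_by]].append(p["name"])
    # Pair each group with a matched facilitator where one exists.
    return {key: {"members": members, "facilitator": facilitators.get(key)}
            for key, members in groups.items()}

groups = form_focus_groups(participants, disaggregate_by="gender")
```

The same routine could disaggregate by ethnicity or disability instead; as noted above, whether a given attribute should be used at all is a context-specific judgment, not something the tooling can decide.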

Overall, evaluating humanitarian programs can be challenging due to the nature of the crisis or emergency and the urgency with which programs are designed. However, it is critical to ensure that the aid provided is effective and appropriate (among other metrics), and evaluations are essential in achieving this goal. By understanding and addressing the challenges faced in evaluating humanitarian programs, evaluators can help improve the impact of aid.

Works Cited: 

Buchanan-Smith, M., Cosgrave, J., & Warner, A. (2016). Evaluation of Humanitarian Action Guide. ALNAP Guide. London: ALNAP/ODI.