The Challenges of Evaluating Humanitarian Programs
Humanitarian crises, such as earthquakes, the resurgence of armed groups, drought, and state collapse, can cause extreme human suffering and the loss of countless lives. They demand a rapid response from private, public, and non-profit organizations to save lives and alleviate intense suffering. Typically, the response is planned quickly, heavily funded, and implemented in complex physical and social environments. As evaluators, it is our responsibility to enter these complex situations and improvised projects and to use results-based methods to capture impact, both positive and negative. It takes an experienced evaluator to go beyond the realm of detailed logic models, systematic monitoring, and stable environments to truly understand and demonstrate the results of these programs.
In response to the complexity of evaluating humanitarian interventions, the UK-based non-profit Active Learning Network for Accountability and Performance (ALNAP) published a guide to best practices for evaluating complex humanitarian interventions. In this guide, ALNAP outlines nine main challenges of performing evaluations in these volatile contexts. From these nine, we have identified the three greatest challenges we have encountered in our work, along with innovative methods we apply to mitigate them.
1. Urgency and Chaos of Humanitarian Interventions
Because humanitarian interventions take place in crisis environments, they are usually planned quickly and implemented in a rapidly changing context. This means that planned activities, monitoring tools, and operations often differ considerably in reality from what was recorded in the project proposal, if there was a proposal at all. In our experience, partners must be willing to work in this flexible environment and provide as much contextual information as possible. The emerging evaluation method of Outcome Harvesting and real-time evaluations (RTEs) can be more effective in these situations: they capture what actually happened (through interviews and group discussions) and work backwards to determine which activities contributed to the outcomes. We also de-emphasize consistency in monitoring and in the indicators set at the beginning of the intervention, placing greater focus on beneficiaries' feedback and on the positive and negative impacts of the intervention.
2. Access to Participants and Data Collection Sites
3. Tense Environment and Ethical Considerations
Humanitarian interventions bring a dimension of tension and unease that must be managed in order to collect quality, ethical, and timely data. In some cases, evaluators will convene focus groups or conduct key informant interviews in which people are unwilling to share information, thoughts, or feelings. This reluctance can be further exacerbated by ethnic, political, gender, and other group tensions.
In our experience and research, we have found a few methods that increase trust and minimize the impact of tensions during the interview process. The first step is to be intentional with the sampling strategy for focus group discussions. Disaggregating by gender, ethnicity, tribal membership, or disability can either stimulate discussion or further alienate minorities. Therefore, the evaluator must be familiar with the country's context and inter-group tensions. For example, while it may be beneficial to separate men and women for a discussion on the impacts of gender or livelihoods programming, it may not be beneficial to separate ethnic groups in discussions on community peacebuilding. Second, it is important to ensure a gender-sensitive match between participants and the interviewer, such as a woman facilitating dialogues with female participants.
Overall, evaluating humanitarian programs can be challenging due to the nature of the crisis or emergency and the urgency with which programs are designed. However, it is critical to ensure that the aid provided is effective and appropriate (among other criteria), and evaluations are essential to achieving this goal. By understanding and addressing the challenges of evaluating humanitarian programs, evaluators can help improve the impact of aid.
Buchanan-Smith, M., Cosgrave, J., & Warner, A. (2016). Evaluation of Humanitarian Action Guide. ALNAP Guide. London: ALNAP/ODI.