Meta-evaluation design through the lens of a generic Grand Challenge intervention model

The opportunity to carry out a meta-evaluation of USAID’s 10 Grand Challenges (GCs) for Development was one not to be missed for Triple Line’s evaluation and fund management teams. We had recently completed an Evaluation of Sida’s Global Challenge Funds, had analysed similar open innovation approaches in our evaluations of the Global Innovation Fund and the Amplify Fund, and over many years of fund management practice, evaluation and research had established ourselves as leading thinkers in approaches to the evaluation of challenge funds. Added to this was the chance to work with a new client, USAID, a pioneer in the use of grand challenges for development cooperation.

Since 2011, USAID’s Grand Challenges for Development programme has mobilised governments, companies and foundations around 10 important international development problems to source new solutions, test new ideas, and scale what works. All GCs offer challenge grants, but many use additional tools depending on the problem they intend to solve, including prizes, hackathons and capacity building services. The model also leverages donors’ ability to de-risk investments and crowd in follow-on funding from both the public and private sectors once a development innovation has proven effective and commercially viable. Together, the Grand Challenges partners have committed more than US$508 million in grants and technical assistance to fund more than 450 innovations in 60 countries.

After more than eight years of implementation, USAID is taking a systematic look at the achievements and challenges of the GCs and their open innovation approaches in the areas of health, education, agriculture, energy, food security, and conflict and violence. The evaluation will generate an actionable evidence base to inform strategy recommendations and practical, adaptable measurement frameworks that USAID and its partners can apply to guide investment decisions and advance the design, management and measurement of GCs. Our team of innovation, challenge fund, cost effectiveness and multi-donor evaluation experts brings an established track record in complex evaluation, drawing on deep experience of designing, managing and evaluating challenge and grant fund schemes, combined with a participatory, utilisation-focused approach that makes effective use of co-creation processes to ensure successful take-up of recommendations.

Our generic intervention model provides a framework against which each GC can be overlaid, enabling similarities and differences to be analysed comparatively and against OECD DAC criteria

Between March and September 2020, the evaluation team worked on the design and planning phase of the evaluation. Following preliminary data collection and consultation activities, we drew on our deep knowledge of challenge fund approaches, theories, key dilemmas and implementation constraints, to develop a generic Grand Challenge intervention model which set out how each challenge is intended to go from initial design, through delivery to achievement of objectives. The model was developed iteratively together with USAID and GC managers, and provides a framework against which each GC can be overlaid, enabling similarities and differences to be analysed comparatively and against OECD DAC criteria.

We built in broad flexibility to allow for different online platforms, timing and preparation, and connectivity and bandwidth issues, along with robust protocols for facilitating online consultations

During inception, our team had to adapt to the restrictions posed by the onset of Covid-19, and we restructured the evaluation methodology to accommodate the need to work remotely. We facilitated a web-based interactive prioritisation workshop with GC managers in the USA and Canada and with USAID personnel in Washington DC to review our evaluability assessment and agree priority questions for the evaluation. Planned country visits for case studies were replaced with remote consultation. We built in broad flexibility to allow for stakeholder preferences regarding online platforms (for conferencing, interactive presentation and visual collaborative working, with backup options in place), timing and preparation (including prior testing of technologies), and any connectivity or bandwidth issues, as well as robust protocols for the facilitation of online interviews and workshops that we developed and refined over the course of the pandemic. Drawing on our track record of flexible, adaptive evaluation approaches, we were able to iteratively design, implement, appraise and redesign tools in response to changing circumstances, meeting current needs and priorities.

Despite the logistical challenges the evaluation faced in its first six months, flexibility on the part of all partners, together with rapid adoption of a range of digital technologies, enabled us to design a robust, participatory evaluation focused clearly on the priority information needs and concerns of USAID and its partners, and structured around a detailed, flexible roadmap for data collection and analysis in the months to come.

Credit: title photos by The Futurepump and Morgana Wingard for USAID

The Futurepump: low-cost solar irrigation pumps for the world's one-acre farmers. Supported by USAID – Powering Agriculture: An Energy Grand Challenge for Development.

Fighting Ebola: using design thinking to help fight Ebola. A worldwide open innovation challenge to generate new solutions for the healthcare workers, care-givers and communities on the frontlines of the Ebola crisis.