Why does measurement and the type of metrics we use matter? In a rapidly changing and complex world, we need to leverage data-driven insights to prove our approaches and programs create lasting impact for the clients we serve.
Measurement was a key theme during our Lean Impact for Ag event1. This blog post is the second in a three-part series highlighting key takeaways from the event. In case you missed it, check out our first post on experimentation in smallholder agriculture. You can watch a full recording of the event here (password: andeag).
Let’s talk measurement – why do metrics matter?
In her book Lean Impact, Ann Mei Chang describes the challenge of measuring impact in the social sector compared to the private sector: “Satisfying your user will increase profits and delight investors. But in the social sector, what people want, what will make the greatest impact, and what funders will pay for are not always the same.”
This balance of priorities is further complicated by factors specific to development2:
- Monitoring and evaluation metrics tend to be geared towards compliance and accountability to prove an intervention’s success rather than adaptive decision-making and learning
- Impact measurement is far more complicated for large-scale, donor-funded development programs than measuring e-commerce transactions or user behaviour
- There are higher stakes of risk-taking, as failure or unintended consequences could jeopardize funding for implementers or make things worse for vulnerable people
Despite the complicated nature of our industry, examining how we measure, what we measure, and why we measure brings us back to the mission and vision that guide our work in development. Learning and decision-making can and should occur in parallel when programs adopt a customer-centric mindset and approach to measuring and understanding whether they are meeting their goals and being effective in their target sectors and client groups.
Whether it’s private sector development, women’s economic empowerment, impact investing, or smallholder finance, if we’re not using the right metrics in program lifecycles, how do we know we’re really making an impact?
Choosing metrics that feed innovation, not vanity
We don’t often use the phrase “vanity metrics” in the development sector – but when we look to other sectors, we can learn valuable lessons and transferable insights.
Eric Ries, author of The Lean Startup, describes vanity metrics as data that quantify activity and look good, but are not action-oriented. For example, an organization may promote that it reached 100,000 smallholder farmers. This aggregate-level statistic may appear impressive, but what impact does it really show?
Chang builds on this concept in her book: “Vanity metrics tend to reference cumulative or gross numbers as a measure of reach … on the other hand, innovation metrics measure the value, growth, or impact being delivered at the unit level. For a mission-driven organization, the equivalent metrics are the unit costs along with the unit yields.”
Related research has argued for redefining concepts of scale to better assess development impact by including measures of the spread of a behaviour (or benefit) and how that spread was achieved across the target population.
During the Lean Impact for Ag event, Chang encouraged participants to integrate lean approaches and mindsets at all levels of an organization:
- Focus on unit-level metrics rather than aggregate-level metrics. Rather than focusing on the total number of farmers engaged, spend time and energy on unit-level metrics for value, growth, and impact, such as the rates of adoption, engagement, and success that will create the intended development impact.
- Shift the conversation and tone of team or leadership meetings. Ask yourselves: are we improving the conversion rate of farmers who express interest and become active clients in programs? Are we improving the rate of farmers adopting recommended agricultural practices? These types of metrics support learning and illustrate how programs are or are not making progress.
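To make this distinction concrete, here is a minimal sketch in Python, using entirely hypothetical cohort numbers (not figures from the event or any program), of how a unit-level conversion rate differs from cumulative reach:

```python
def conversion_rate(expressed_interest: int, active_clients: int) -> float:
    """Unit-level innovation metric: the share of farmers who expressed
    interest and went on to become active clients."""
    return active_clients / expressed_interest

# Hypothetical monthly cohorts -- illustrative only.
cohorts = [
    {"month": "Jan", "reached": 12_000, "interested": 3_000, "active": 450},
    {"month": "Feb", "reached": 15_000, "interested": 3_600, "active": 720},
]

# Cumulative reach only ever grows, no matter how the program performs:
# a classic vanity metric.
total_reach = sum(c["reached"] for c in cohorts)

# The conversion rate, by contrast, shows whether the program is
# actually improving month over month.
for c in cohorts:
    rate = conversion_rate(c["interested"], c["active"])
    print(f"{c['month']}: reach={c['reached']:,}, conversion={rate:.0%}")
```

In this made-up example, reach climbs from 12,000 to 27,000 regardless of performance, while the conversion rate (15% in January, 20% in February) is the number that tells a team whether its changes are working.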
Another way to approach lean impact is to understand the role of aggregate-level metrics (reach) in monitoring and evaluation efforts. The development industry is not going to stop measuring the number of smallholders or entrepreneurs supported in emerging and frontier markets; nor will Fortune 500s or start-ups stop benchmarking their client numbers and reach.
But my challenge to our industry is this: before referencing the number of smallholders trained or investments made in SMEs on websites and project reports, let’s think about what the data represent and acknowledge that they are insufficient on their own.
How should we start integrating innovation metrics if this data is not already being collected? INNOVATE’s work with our partner in Malawi reveals some promising pathways.
Learning from our partners – an example from Malawi
As part of the INNOVATE project, our team encouraged partners to track what they learned from customer-centric approaches to measurement.
Agronomy Technology Limited’s (ATL) experience in Malawi, measuring the effectiveness of a model that provides input loans, information and training on good agricultural practices (GAP), and marketing services, shows how customer-centric, unit-level metrics can generate new insights. Rather than tracking farmer reach or aggregate loan amounts, ATL analyzed rates of GAP adoption, loan repayment, retention, and use of marketing services.
In ATL’s recent case study, 99% of respondents indicated that the GAP information and training were useful, yet only 21% adopted the full set of recommended practices, primarily because the practices were too labour-intensive. Women’s adoption rates were much lower than men’s (15% vs. 28%). These insights reveal the value of testing and exploration to better measure and address farmers’ labour constraints.
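Computing a disaggregated adoption rate like this is a simple grouping operation. The sketch below uses hypothetical survey records shaped only to echo the gap the case study reports, not ATL’s actual data:

```python
from collections import defaultdict

def adoption_rates(responses):
    """Adoption rate per group: the share of respondents who adopted the
    full set of recommended practices, disaggregated by gender."""
    counts = defaultdict(lambda: [0, 0])  # group -> [adopters, total]
    for r in responses:
        counts[r["gender"]][0] += 1 if r["adopted_full_gap"] else 0
        counts[r["gender"]][1] += 1
    return {g: adopters / total for g, (adopters, total) in counts.items()}

# Hypothetical records -- sized so the rates mirror the reported gap.
survey = (
    [{"gender": "female", "adopted_full_gap": i < 3} for i in range(20)]  # 3 of 20
    + [{"gender": "male", "adopted_full_gap": i < 7} for i in range(25)]  # 7 of 25
)
rates = adoption_rates(survey)
```

The same grouping logic extends to any respondent attribute (region, farm size, crop), which is what makes unit-level data so much richer than a single aggregate count.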
The ATL case demonstrates how focusing on innovation or unit-level metrics tells us so much more than vanity metrics. Had the case study only reported the number of farmers who received input loans or GAP training, it would not have told us much about the challenges and opportunities to improve service delivery and overall adoption.
I’ll wrap up this post with an encouragement from Chang’s book and the questions we can ask ourselves and our colleagues about learning and measuring impact and performance:
“Improvement on innovation metrics doesn’t always proceed linearly, so shorter-term progress might measure the pace of learning for both teams and individuals. Are interviews being conducted, experiments run, and data collected? How quickly are hypotheses being proven or disproven? Are sufficient risks being taken so that we are seeing both successes and failures? Do we pivot quickly when the data indicates a particular path is unlikely to produce the results needed? These can also be set as objectives, reinforced in performance reviews, and celebrated in meetings to reorient the culture3.”
The next post in this series will build on this conversation and address the organizational culture change and the systems change and reform our industry needs to better understand and serve the world’s smallholder population. Stay tuned!
1MEDA INNOVATE (a three-year project funded by IDRC), in partnership with ANDE’s Agribusiness Learning Lab, hosted a learning event this past June, Lean Impact for Ag: Transforming how we approach, design, and fund solutions for smallholders, to reflect on these lean principles.
The event built on INNOVATE’s learning agenda and goal of engaging with and influencing stakeholders in the agriculture sector. The speakers were Ann Mei Chang, author of the book Lean Impact; Rocío Pérez Ochoa, Co-Founder and Director of Bidhaa Sasa; and Colin Christensen, Global Policy Director at One Acre Fund.
Speakers shared perspectives, examples of learning, and questions around systems change that are pertinent across the global development sector, and especially relevant for smallholder agriculture programs and policies.
2Lean Impact, page 22.
3Lean Impact, page 237.