Approval for any innovation project requires satisfactory answers to three key questions.
This blog entry tackles the first of these questions and shares lessons from six recently completed open-innovation challenges on which SeaFreight Labs served as Project Advisor. These challenges were run by Habitat for Humanity and World Vision to solve long-standing humanitarian problems related to each organization’s work [see Note 1 below]. They ran in late 2020 and into 2021 on the InnoCentive platform, drawing on the InnoCentive crowd (www.innocentive.com).
This blog entry contains the following sections:
Introduction
The Six Challenges
Critical Success Factor #1 – Problem Selection
Critical Success Factor #2 – Challenge Definition
Critical Success Factor #3 – Crowd Recruitment
Conclusion
Introduction
It is important to begin this discussion by agreeing on the proper definition of ‘work’ in the context of a humanitarian challenge, as in: did the humanitarian challenge ‘work’? It would be fantastic if running an open-innovation challenge could single-handedly eliminate a long-standing problem overnight. However, this is not a reasonable expectation, and it is unlikely to happen.
In my view, the gold standard for success for any open-innovation challenge is that:
The published problem is solved to the Seeker’s satisfaction; AND,
The Seeker is positioned to take the necessary steps to bring the solution forward so it is widely adopted within a reasonable time period.
The Seeker’s opinion of the proposed solutions is revealed by its decision on whether to award a financial prize in exchange for the intellectual property of one or more submissions. A decision to award no prize money indicates that no Solver submitted a proposal better than the Seeker’s status quo. Conversely, a decision to award prize money indicates that the Seeker found the winning solutions valuable enough to want access to the intellectual property in those submissions.
Regarding the second requirement for success, the ability to move forward with an idea depends on how the problem was initially framed and on what types of permanent solutions might ‘solve’ it. I have a predisposition toward action, so, for the purposes of this exercise, I will treat a challenge that leads to field testing as a ‘success’. Any other outcome will be counted as one where the challenge did not ‘work’.
This harsh evaluation filter is something I developed over the course of advising these six open-innovation challenges. When we started, I was open to innovation that might move issues forward, even if only in a general sense. Now, at the end of a long process, I am exclusively focused on actionable results. This blog entry reflects that learning.
The Six Challenges
It has been a great privilege to work alongside two fantastic organizations as they experimented with open innovation as a new tool for their humanitarian work. Both Habitat for Humanity and World Vision deserve praise for seeking new ways to do their work, and to do it better. Staff at each organization devoted countless hours to each challenge in the hope of better serving their beneficiaries.
The table below summarizes the six humanitarian challenges discussed in this blog. By the standard of success defined above, only one of these challenges did not ‘work’ (#6). For that challenge, no suitable solutions were proposed, so the problem remains unsolved. A number of creative ideas were submitted, but none met all of the stated requirements.
Table 1.
* “HFH” = “Habitat for Humanity”; “WV” = “World Vision”.
** “RTP” = “Reduction to Practice”.
*** An ideation challenge would never have a prototype created because the submissions are conceptual in nature.
The other five challenges (#1, #2, #3, #4 and #5) all have follow-on activity planned for the winning solution(s). This means that the open-innovation process identified new solutions or strategies for long-standing humanitarian problems, and the Seekers are excited enough about those solutions to invest further time and money in field testing, with the aim of making the solutions commercially available.
A success rate of 50-83% (depending on whether challenges #2 and #3 are able to initiate field testing) is VERY good for any innovation-related process; the arithmetic behind that range is spelled out below. The high success rate is a direct result of the InnoCentive-led process and the efforts of all involved. In the next sections, I will describe the key steps along the way that I believe led to these outstanding results.
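For readers who want the numbers made explicit, here is a minimal sketch, in Python, of how the 50-83% range is computed. The variable names and the grouping of the challenges are my own, based on the counts given in this post.

```python
# Minimal sketch of the success-rate arithmetic described above.
# 'Success' = the challenge leads to field testing of a winning solution.

total_challenges = 6
confirmed_field_tests = 3   # winners already heading to in-house field testing
pending_field_tests = 2     # challenges #2 and #3, field testing not yet confirmed

low = confirmed_field_tests / total_challenges
high = (confirmed_field_tests + pending_field_tests) / total_challenges

print(f"Success rate: {low:.0%}-{high:.0%}")  # -> Success rate: 50%-83%
```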
Click HERE to learn about the winners of each of these challenges.
Critical Success Factor #1 – Problem Selection
It turned out to be very difficult for a global humanitarian organization to select ‘crowd-solving-worthy’ problems from its global set of difficulties. The problem can’t be too big, like ‘solve global hunger’. It can’t be too small, like ‘how to fix a leaky roof in our HQ’. And it can’t be too general, like ‘how to be more efficient’. In short, there are many types of problems for which crowd-solving will not deliver success.
With the guidance of our InnoCentive team, we selected problems for which the global crowd was able to deliver a solution 83% of the time. These problems shared the following characteristics:
They were high-impact problems where a solution could benefit hundreds of thousands or millions of people.
The Seeker had a clear understanding of minimum requirements for a workable solution and could describe them in a way that was externally verifiable.
They looked for new, lower-cost ways to accomplish familiar outcomes.
They were easy to understand and it was easy to see why a solution would be valuable.
They didn’t require the solvers to invent any new technology.
We rejected scores of nominated problems during the initial screening phase of the project because they failed one or more of the characteristics above (a sketch of this screening filter appears below). In hindsight, this was a vital contributor to our eventual results; the ability to reject proposed problems and wait for ‘good’ ones is a necessary condition for success.
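To make that screening concrete, here is a minimal sketch of the filter implied by the five characteristics above. The class and function names are illustrative only, not part of any official InnoCentive tool.

```python
# Illustrative sketch of the problem-screening filter described above.
from dataclasses import dataclass

@dataclass
class CandidateProblem:
    name: str
    high_impact: bool               # could benefit hundreds of thousands of people
    verifiable_requirements: bool   # minimum requirements are externally verifiable
    familiar_outcome: bool          # seeks a lower-cost route to a known outcome
    easy_to_understand: bool        # value of a solution is obvious to outsiders
    no_new_technology: bool         # solvable without inventing new technology

def crowd_solving_worthy(p: CandidateProblem) -> bool:
    """A problem is worth posting only if it meets ALL five characteristics."""
    return all([
        p.high_impact,
        p.verifiable_requirements,
        p.familiar_outcome,
        p.easy_to_understand,
        p.no_new_technology,
    ])

# Example: 'solve global hunger' fails the screen (not verifiable, needs invention).
too_big = CandidateProblem("solve global hunger", True, False, False, True, False)
assert not crowd_solving_worthy(too_big)
```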
Critical Success Factor #2 – Challenge Definition
The challenge definition is what a potential Solver reads and responds to. It must be written to attract the attention of as many people as possible. At the same time, it must be precise in its requirements, so that the Seeker can be confident that a solution meeting the stated requirements is one it will want to move ahead with to the field-testing stage. It turns out to be quite difficult to achieve both objectives in one 6-8 page document.
The decision about what type of challenge to run is also very important. InnoCentive offers ideation, theoretical and reduction-to-practice (RTP) types. These types require increasing levels of detail in the submissions, which can reduce the number of responses. Having run each type, I am heavily predisposed toward the RTP type because it communicates to potential Solvers that we are looking for a working solution, not just a concept. However, our two ideation challenges, dealing with less-defined but massively impactful issues, both awarded prize money and both are likely to lead to field testing of the winning concepts. So the nature of the problem should dictate the challenge type (a toy sketch of this heuristic follows), and that is the best-practice process I still recommend.
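Here is a toy sketch of the challenge-type heuristic as I read it from the experience above. The decision criteria are my own simplification, not an InnoCentive rule.

```python
# Toy sketch: map a problem's character to one of InnoCentive's three challenge types.

def choose_challenge_type(problem_is_well_defined: bool,
                          working_demonstration_needed: bool) -> str:
    if not problem_is_well_defined:
        # Less-defined but high-impact issues fit ideation (conceptual) challenges.
        return "ideation"
    if working_demonstration_needed:
        # RTP signals to Solvers that a working solution, not a concept, is wanted.
        return "reduction-to-practice (RTP)"
    # Well-defined problems that can be judged on paper fit theoretical challenges.
    return "theoretical"

print(choose_challenge_type(True, True))    # -> reduction-to-practice (RTP)
print(choose_challenge_type(False, False))  # -> ideation
```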
Selecting a cost target for a humanitarian challenge is demanding because the goal must be substantially below current cost options (to deliver a dramatic improvement if a solution is found) while remaining reasonable enough not to scare away potential Solvers. We spent many hours on the phone trying to find an aggressive cost target that would not seem unreasonable.
Being clear and complete about the challenge requirements is also vital to getting a solution with a good chance of future adoption. Potential Solvers come from all over the world, and few know much about the humanitarian problems underlying the challenge. Therefore, the challenge definition must take great pains to identify every mandatory requirement, so that the proposed solutions address all of the relevant issues.
Critical Success Factor #3 – Crowd Recruitment
A preeminent success factor in crowd-solving is the quality of the ‘crowd’ that reads the challenge definition and, hopefully, knows enough about the problem to submit a winning solution.
In these challenges, SeaFreight Labs invested significant effort in identifying potential audiences that might have an interest in the challenges and the skills needed to make quality proposals. We partnered with third-party organizations such as HeroX, Engineering For Change, ResearchGate and the Maker Community to promote the challenges and invite participation from the general public. The data below suggests that these efforts were fruitful.
Our five solved challenges had a strong mix of Solvers that came from the InnoCentive crowd and an equally strong set that joined InnoCentive in order to submit a proposal (called “New Solvers”). Figure 1, below, shows the distribution of these two groups across the stages of the challenges.
Figure 1.
In this set of challenges, two of the three solutions heading to in-house field testing came from the InnoCentive crowd (67%), while the third came from a Solver who joined InnoCentive specifically to submit a proposal to the challenge. The other two challenges with winners had winning Solvers who came from the general public.
These results lead to two important conclusions about the crowd:
The InnoCentive crowd is a valuable asset to any Seeker executing a humanitarian challenge. Its members are likely to register for a challenge and to submit quality proposals that proceed far in the evaluation process.
It is a good idea to supplement the InnoCentive crowd with other potential Solvers. Promotion by the Seeker through its own channels, plus targeted promotion to appropriate audiences, will complement the InnoCentive crowd and is likely to produce additional high-quality submissions.
Conclusion
I have used this blog entry to share my definition of ‘success’ for our recent humanitarian challenges and to discuss how the reader might benefit from my experiences. My goal is to improve the reader’s odds of achieving ‘success’ in future humanitarian crowd-solving.
For a moment, stop and reflect with me. For a mid-five-figures (USD) out-of-pocket investment (including prize money, and calculated as part of a multi-challenge engagement), a global humanitarian organization was able to poll people from all over the world about long-standing and important problems. In 83% of the cases, a skilled person from another country engaged with the problem and proposed a solution that the Seeker felt was good enough to plan an internal field test. The Solvers of these five challenges came from eight different countries, and none was from the US, the home office of both Seekers. I am hard pressed to think of another way the Seekers could have achieved this outcome with such a small initial investment.
It is dangerous to extrapolate from a set of only six experiences. But it is perhaps even more dangerous to proceed without any external guidance when one is trying something new at one’s organization. Please leverage my experiences aggressively to give your open-innovation efforts the best chances for success!
Note 1:
Habitat for Humanity’s challenges were:
1. “Increasing Resilience to Earthquakes and Typhoons for Homes with No Foundations” which launched as an RTP (“Reduction-To-Practice”) challenge on 7 October 2020 and closed on 5 January 2021. The challenge prize was US$25,000 with the winner announced on 28 September 2021.
2. “Improved Construction and Demolition Waste Management” which launched as an ideation challenge on 26 October 2020 and closed on 25 January 2021. The challenge prize was US$15,000 and was awarded on 22 July 2021.
3. “Affordable Water Harvesting for Low-Income Households in Urban Areas” which launched as an RTP challenge on 2 April 2021 and closed on 5 July 2021. The challenge prize was US$25,000 with the winner announced on 20 January 2022.
World Vision’s challenges were:
1. “Affordable Rural Single Family Sanitation Solutions” which launched as an ideation challenge on 14 October 2020 and closed on 12 January 2021. The challenge prize was US$15,000 with the winners announced on 29 June 2021.
2. “Low-Cost Chlorine Monitoring for Rural Piped Water Systems” which launched as an RTP challenge on 4 November 2020 and closed on 4 February 2021. The challenge prize was US$20,000 with the winner announced on 29 January 2022.
3. “Efficient and Reliable Counting of Improved Latrines” which launched as an RTP challenge on 9 June 2021 and closed on 7 September 2021. The challenge prize was US$20,000. The challenge had no winners and was withdrawn.
See www.seafreightlabs.com/our-challenges for more details.