Incentive prizes, deceptively simple in concept, are often challenging to construct in a way that drives the desired outputs and supports the desired outcomes. How can prize designers get it “right”?

Executive summary

In the last five years, incentive prizes have transformed from an exotic open innovation tool into a proven innovation strategy for the public, private, and philanthropic sectors. Incentive prizes seem deceptively simple: Identify a problem, create and publicize a prize-based challenge for solving that problem, sign up diverse participants, and offer a reward to the winner. In practice, designing prizes that target the right problem, attract the most capable participants, and capture the public's imagination in order to achieve a desired outcome involves a complex set of design choices. This report aims to help prize designers organize and master those choices.

In the past, designers thought of prize types as distinct tools, often seeking to match the right tool to the problem they were trying to address. Now, prize design has become a craft. Experienced designers help their organizations achieve a range of outcomes by building highly customized prizes and deploying them in concert with other problem-solving and public engagement strategies. They focus less on what type of prize to use and more on how to assemble the fundamental elements of prize design through a series of integrated design choices informed by research and analysis. While this approach is understandably more complex than simply pulling a prize out of a toolbox, it also enables more sophisticated prize designs, allowing organizations to more effectively get what they need.

The craft of incentive prize design offers practical lessons for public sector leaders and their counterparts in the philanthropic and private sectors. It helps them to understand:

What types of outcomes incentive prizes help to achieve

What design elements prize designers use to create these challenges

How to make smart design choices when launching an incentive prize to achieve a particular outcome

This report treats prize design not as a linear, step-by-step process, but rather as an iterative activity that requires making integrated choices to solve a carefully defined problem and then generating outputs that achieve a larger set of outcomes. By synthesizing insights from recent literature, expert interviews, and analysis of over 400 prizes, we identify six outcomes that designers commonly seek (individually or in combination), falling along two dimensions:

Developing ideas, technologies, products, or services

Attract new ideas

Build prototypes and launch pilots

Stimulate markets

Engaging people, organizations, and communities

Raise awareness

Mobilize action

Inspire transformation

The first dimension captures the range of conceptual and tangible things that designers are trying to develop. The second reflects how prizes can incent individuals, groups, organizations, and institutions to get involved in solving important public sector problems. In most cases, incentive prizes aim for outcomes on both dimensions. Looking at prizes through the lens of outcomes allows designers to establish a stronger link between what their organizations are trying to do and the benefits that prizes can help generate.

We use the phrase "elements of prize design" to describe and organize the strategic choices that designers should consider when crafting incentive prizes. There are five core design elements: resources, evaluation, motivators, structure, and communications. The heart of this report features practical, decision-oriented frameworks for designers, helping them understand how they can tailor prize design elements to facilitate different outcomes and increase the effectiveness of their challenges.

Through decision-oriented frameworks that link outcomes to design elements, The craft of incentive prize design enables public, philanthropic, and private sector leaders to build better prizes. The report helps these leaders benefit from the recent experiences of designers who are advancing the art of incentive prize design in the service of the public good. By accessing these experiences, illustrated with recent examples of successful prizes, designers can more effectively harness the ingenuity of the public to address their most vexing challenges.

Introduction

On January 12, 2010, a 7.0-magnitude earthquake devastated Haiti, affecting three million people and destroying significant portions of the nation’s fragile infrastructure. Many organizations flooded Haiti with support, offering different forms of assistance to rebuild the country.1 Among them were USAID and the Gates Foundation, which recognized the critical need for jumpstarting financial services, the backbone of any functioning modern economy. Working together, they designed a prize that incented new mobile money service providers to launch in Haiti and achieve specific operational and transactional milestones.

Called the Haiti Mobile Money Initiative, this incentive prize helped stimulate a new mobile money market in Haiti, where, before the earthquake, only 10 percent of the population used traditional banks. It featured $10 million in awards, broken into different-sized purses, some to incent first-to-market services and others to encourage scaling customer adoption. To increase the initiative’s effectiveness using more traditional sources of aid, USAID contributed $5 million in technical and management assistance. Within six months of launch, two mobile banking service providers were up and running. By October 2011, both participants won scaling awards for more than 100,000 transactions;2 less than a year later, one participant achieved the 5 million transaction milestone.

The Haiti Mobile Money Initiative powerfully illustrates how the craft of prize design has rapidly evolved in recent years, thanks to public, private, and philanthropic organizations that are using prizes to innovate in the service of the public good. Leaders and prize designers in these organizations are learning through experience that incentive prizes can meaningfully advance their missions. In the process, they are also discovering that successful prize design involves a complex series of choices to attract the right competitors with the knowledge and experience needed to solve a wide range of complex problems.

While prizes confer many benefits, their primary appeal is allowing leaders to pay only for satisfactory results. Competitors receive compensation, in whatever form it may take, only if they meet the evaluation criteria established by the prize’s designer. That is not to say that incentive prizes can or should be used only when results can be guaranteed. Some designers are following a higher-risk, higher-reward strategy of using prizes to achieve goals that cannot be specified in advance. As budgets tighten in every sphere while the demand for innovation is rising, prizes are becoming recognized as a promising method to address an array of problems, often more efficiently and effectively than traditional approaches. Government and philanthropic leaders view prizes as a vehicle to drive change and describe the experience of implementing a prize to be a valuable problem-solving exercise in and of itself.

While prizes are not new, the idea that prize design is a craft that organizations should master to launch successful challenges has gained significant currency. The White House’s Office of Science and Technology Policy dedicates personnel to help federal departments and agencies design effective prizes. More than 50 federal departments and agencies have offered prizes and some federal agencies have added dedicated staff for prize design as well.3 Many major US philanthropies are using prizes to advance their missions, as are dozens of state and local agencies throughout the United States. Organizations with an international focus, such as the World Bank, use prizes to drive innovation in the developing world. Specialized advisory service firms now provide prize technology platforms and strategic guidance to organizations that lack the skills and capabilities needed to design and implement their own challenges.

All of this activity has yielded lessons and strategies that can help designers make choices about what kinds of prizes can generate specific outputs in the service of larger outcomes. They are learning to use prizes to achieve different but mutually reinforcing outcomes and finding ways to match prizes with other complementary problem-solving strategies. By experimenting with the elements of prize design and building on what they learn, designers have begun creating more sophisticated prize structures that can engage a broader group of qualified participants through multiple stages of competition. From the development of new technology prototypes to the reduction of energy consumption to challenges that help prevent child slavery, prize designers are capturing the public’s imagination and unlocking their creativity.

Designers commonly seek six outcomes (individually or in combination) that fall along two dimensions:

Developing ideas, technologies, products, or services

Attract new ideas

Build prototypes and launch pilots

Stimulate markets

Engaging people, organizations, and communities

Raise awareness

Mobilize action

Inspire transformation

The first dimension captures the range of conceptual and tangible things that designers are trying to develop through incentive prizes. The second reflects how prizes can incent individuals, groups, organizations, and institutions to get involved in solving important problems. These dimensions represent intermediary and complementary outcomes that can be achieved during and after the execution of a prize. Looking at prize design through the lens of outcomes allows designers to establish a stronger link between what their organizations aspire to do and the specific outputs that prizes can generate.

By using prizes to achieve these outcomes, designers are generating whole innovation ecosystems. Prizes build and maintain communities of interest that help organizations address complex, ambiguous problems. Prizes educate the public and encourage citizen participation in new and dynamic ways. Prizes create opportunities for public organizations to share costs with private and philanthropic partners. They foster collaboration among government, academia, the private sector, and individuals.4 Some organizations even use prizes to shape commercial markets, either to develop technologies, goods, and services directly or to bring innovative prototypes to market for the first time. Most importantly, prizes also demonstrate that government can innovate in service of the public good and open up problem solving to leverage the ingenuity of citizens and businesses.

As the use of prizes grows, the language of prize design is becoming more specialized. One important and increasingly common distinction involves “outputs” and “outcomes.” Prize designers use “outputs” to describe the specific end results of a prize, such as a software application (app) with particular functionality, the formation of a technical community around the development of that app, and even insights about how to improve that type of prize implementation. In contrast, prize designers use “outcomes,” as we’ve described above, to reference more general and aspirational goals, which can be fulfilled by a prize as well as other approaches. For example, an agency may wish to pursue the outcome of building prototypes and launching pilots by designing and executing a prize that generates an app as an output. To achieve that outcome, the same agency may need to find ways to generate other, complementary outputs, such as a marketing campaign that introduces the app to target audiences. In sum, “output” and “outcome” help designers to distinguish tactical results from strategic objectives.

The growing appeal of prizes has also generated definitional confusion. “Prize” and “challenge” are often used interchangeably, making it difficult to distinguish between how these terms represent different ways to solve problems and, in the US government context, what legal authority permits an agency to solve a problem in a particular way. In this report, we will treat “challenges” as an umbrella term for a variety of problem solving approaches, including incentive prizes, grants, direct investments and partnerships, to name a few. While incentive prizes will be our focus, we will also draw lessons from different types of challenges, such as competitive grants, that can be applied to the craft of incentive prize design. We recognize that these distinctions are further complicated by the fact that US government agencies must conduct different types of challenges under specific legal authorities, such as the America COMPETES Act.5 Federal leaders should consult their offices of general counsel to determine what legal authorities govern their ability to stimulate innovation, acquire particular goods or services, conduct research for the public good, or work with private organizations for mutual benefit.

It has been five years since the advisory services firm McKinsey & Co. published the first major report on the use of prizes for philanthropy.6 Since then, the US government alone has administered over 350 prizes. The prize typology featured in McKinsey’s report had a significant influence on the first generation of public sector and philanthropic prize designers who needed an organizational structure to understand what kinds of prizes were possible to implement and when to use them. Because many designers still reference McKinsey’s typology, this report seeks to build on it by focusing on the overarching outcomes that designers are trying to achieve and the fundamental elements of prize design that experienced designers use. By drawing upon the rich challenge activity of the past five years, we aim to help designers understand what they can do with prizes, and how—practically speaking—they can assemble prize design elements in different ways to achieve these outcomes.

This report explores the craft of designing incentive prizes and shows how design choices can influence a prize’s ability to solve vexing challenges by drawing links between outputs, outcomes, and the elements of prize design. While we focus on public sector incentive prizes in the United States, many of the trends and design lessons reported here are drawn from and are applicable to challenges launched by philanthropic and private organizations. In that spirit, we highlight examples from these sectors to illustrate how designers make strategic choices, and how those choices represent leading practices in this growing field. In the course of our research, we also evaluated challenge-related documents and interviewed prize experts from outside the United States. Our findings encompass these insights as well.

We intend the guidance featured in this report for all leaders interested in prize design, from neophytes to those who have already integrated prizes into their problem solving strategies. To appeal to this broad audience, the main body of this report will give an overview of prizes, their design, and the outcomes that they can achieve. Appendix A will provide additional guidance for more advanced designers.

Recognizing the diversity of experiences of prize designers, the report features decision-oriented frameworks that organize the now vast array of complex and distributed prize information. We deliberately created these frameworks to support iterative prize design, because many other excellent reports, such as Harvard Berkman Center’s Public-private partnerships for organizing and executing prize-based competitions or Nesta’s Challenge prizes: A practice guide, already take process-oriented approaches.7

While we have drawn useful information from recent academic literature, articles, and published commentary, much of our data is derived from in-depth interviews with experienced prize designers in government and philanthropy, as well as a proprietary database of over 400 challenges from Challenge.gov and select philanthropic, state, local, and international competitions. The result is a rich compendium of practical guidance for prize designers in the US and around the world.

Public sector challenges by the numbers

These data mainly summarize and characterize public- and philanthropic-sector prize activity, based on an analysis of 314 challenges found on Challenge.gov and validated through a secondary dataset of 89 philanthropic, state, local, and international challenges. In coding these data, we found that individual challenges often sought to achieve more than one of the six outcomes discussed in this report simultaneously. Our analysis of challenges by outcome illustrates how prize designers are prioritizing the elements of prize design to achieve certain results. Additional information on our data analysis methodology can be found in Appendix B.

Getting started with incentive prizes

Public, private, and philanthropic leaders are wrestling with technological, economic, environmental, and societal problems that seem to get more complex each day. Public leaders, moreover, must consider these multifaceted problems with limited resources that often prevent them from developing innovative solutions quickly and effectively. This is why government leaders in particular are turning to incentive prizes to advance their missions through incentives and the ingenuity of the crowd.

Leaders who use prizes effectively take a strategic approach. They work with colleagues, partners, and subject-matter experts to carefully select and define problems likely to be solvable through prizes. They collaborate with stakeholders inside and outside their organizations to determine the outcomes they wish to achieve—and then use those decisions to drive a prize design process that will yield specific outputs. Because public organizations must adhere to specific legal requirements, government leaders determine what legal authority will allow them to achieve their desired outcomes. These leaders publicize the challenge, its requirements, and its results in language that will resonate with the audiences they seek to engage. Finally, to realize the full benefits of the prize, leaders initiate legacy activities to provide resources and support to the prize participants who remain engaged after the challenge comes to a conclusion.

Problem definition

Because problem definition involves grappling with a great deal of ambiguity, it is arguably the most difficult part of prize design. It sounds deceptively simple: What problem should the challenge address? Answering that question, however, requires clarity about the outcomes sought and the ways to achieve them, as well as a specific problem statement that succinctly describes the fundamental difficulty to be overcome. Designers often initiate these definitional discussions with a diverse set of internal and external experts and stakeholders, who can bring valuable perspectives and who ultimately need to be aligned around the final problem statement.

To manage the ambiguity of problem definition, designers often start by developing a clear understanding of the outcomes they seek and the different ways they can achieve them. Because prize design varies, sometimes dramatically, depending upon the outcomes selected, careful definition of these outcomes is critical. These early-stage problem definition discussions help to establish the causal and logical linkages between the specific difficulty to be addressed and the outcomes selected. They help to surface the kinds of challenges (for example, incentive prize, grant, investment) that are best suited to address the problem. These discussions reveal ways in which the designer’s organization may or may not have the legal authorities, resources, skills, and capabilities to address certain facets of its own problems. Finally, by refining their understanding of the outcomes sought and ways to achieve them, designers can explore whether a prize is likely to produce results more effectively than other possible approaches.

Outcome specification establishes the broad set of aspirations, whereas problem statement definition more narrowly frames the need that the prize will ultimately address. Developing a problem statement helps designers frame a problem that is neither too hard (because no one will win the prize) nor too easy (because the prize will be won too quickly, and not necessarily with the optimal solution). The problem statement must be attractive to a broad pool of potential competitors (because greater diversity can lead to more innovative solutions), but not so broad that submission quality erodes. And the problem statement must describe a challenge whose scope is appropriate for the types of participants sought: A problem that requires years of work, specialized facilities, or high capital expenditure to solve may not fit well with certain target participant groups.

Making these decisions often requires tapping into different types of expertise and devoting a considerable amount of staff time, depending upon the complexity of the problem. Technical experts can be valuable for grappling with the science and technology underlying the problem. Academics and industry representatives can be highly useful for evaluating the time, expertise, and expense needed to solve certain kinds of problems. Designers and strategic thinkers can help refine and reframe problems in ways that are conducive to prize-based solutions. Finally, a gifted facilitator can help to ensure that these different types of professionals have the right conversations and make progress toward a workable problem statement.

All manner of problems may be amenable to prize-based solutions, if defined properly. Consider, for example, the range of problems defined by USAID for its Tech Challenge for Atrocity Prevention. For one of the five components of this challenge, USAID defined the “problem” as third parties who enable or contribute to genocide, consciously or inadvertently. To solve this problem, they sought technologies and innovations that “identify, spotlight, and deter” these enablers. For another component of the prize, USAID identified the unpredictability of genocide as the problem. This led the agency to seek algorithms that could forecast potential hot spots based on socio-political indicators and historical trends.8

According to Jason Crusan, who directs the National Aeronautics and Space Administration’s (NASA) Center of Excellence for Collaborative Innovation (CoECI), problem definition entails “hav[ing] to deconstruct the problem into bite-sized pieces, and abstract[ing] [each] to understand how it’s just one piece of the larger puzzle.”9 Indeed, it can take up to a year to wrestle with problem definition.10 During this time, designers typically conduct a detailed landscape analysis, meeting with internal and external experts as well as partners to define and digest the scope of knowledge applicable to the problem and its surrounding issues. As designers begin to prioritize specific areas of the problem for research, they can also begin evaluating what combination of potential solutions may best achieve their desired outcomes.

An example from CoECI emphasizes this point. Every year, fraud, waste, and abuse in the health care industry accounts for hundreds of billions of dollars in losses.11 The US Centers for Medicare and Medicaid Services (CMS) wanted to apply new tools to their ongoing efforts to address this challenge. The agency partnered with CoECI, the state of Minnesota, Harvard Business School, and TopCoder to find a more efficient and effective way to help states spot Medicaid fraud.

Given the challenges associated with identifying fraud, the partners took time to define the problem, which centered on the inability of current software systems to effectively perform risk scoring, credential validation, identity authentication, and sanction checks. To tackle this problem, they launched the Provider Screening Innovator Challenge, which sought screening software that could help ensure that Medicaid funds are not spent fraudulently.12 To make sure the overarching challenge would generate a workable solution, the design team broke it into four components and 124 separate challenges. As a result, the partners were able to obtain an ecosystem of solutions based on submissions from more than 1,600 participants from 39 countries. The software applications developed through the challenge series are being compiled into an open-source solution for the state of Minnesota—and perhaps the nation.13

Push versus pull: Is a prize appropriate?

Problem definition discussions inevitably raise important questions about which approach—a challenge, a prize, or some other mechanism—can generate the best solutions. Experienced prize designers have learned that incentive prizes are not appropriate for every type of problem and are not a silver bullet even for the right problems. One valuable way to navigate this strategic choice is to consider the distinction between "push" and "pull" mechanisms, a reference to how different types of rewards, placed at different points in a solution development process, can create unique incentives.

Push mechanisms include traditional grants and contracts, such as fixed price or time and materials contracts or research and development grants. These provide vendors or grant recipients with payments or incentives to develop and deliver specific services or technologies, in effect paying for the effort involved, but leaving the risk that the product may not meet expectations. Push mechanisms can be used to generate a range of outputs, from purchasing services or technologies that are well-understood to supporting early-stage research and development efforts that have uncertain outputs.

Pull mechanisms, including incentive prizes, reward participants not for their efforts per se, but for their outputs, such as ideas, prototypes, pilots, or commercial products and services. Leaders use pull mechanisms to encourage participants to experiment with innovative and, sometimes, risky approaches, while paying only for results that meet predetermined rules or specifications. For some pull-mechanism prizes, if no one wins, the sponsoring organization is responsible for only its administrative costs.

When should you not use a prize?

Prizes cannot solve every type of problem. Here are a few considerations:

Prizes should not be used when there is a clear, established, effective approach to solve a problem.

A prize’s strength comes from its ability to incent participants to create novel solutions. Using a prize to create solutions already available in mature markets may simply waste participants’ efforts.14

Prizes should not be used when potential participants are unwilling or unable to dedicate time and resources to solve the problem.

For instance, as appealing as start-up companies may be as prize participants, they are rarely able to shift their commercial focus to a challenge. Prize designers need to understand the risk tolerance and capabilities of their potential participants before committing to the use of a prize that requires their engagement to be successful.

Prizes should not be used when there are only a limited number of participants who can address the problem.

If the universe of participants is small and known, then a prize may not be necessary. Instead, leaders should use other types of challenges, such as “pay for performance” approaches that issue grants or contracts with milestone-based payment terms. One example of this approach is NASA’s Commercial Orbital Transportation Services program in which industry agreements with certain companies provide for fixed-price payments only when performance milestones are met.15

Experienced designers often combine prizes with push mechanisms to achieve their goals more quickly and effectively. For example, in 2013 the Army Research Laboratory ran five prizes that successfully identified new methods for generating energy from a walking hiker and new ways to produce potable water for humanitarian missions. The winning solutions came from individuals around the globe—many of whom would not have had the opportunity to work with the Army through other means. The Army Research Laboratory plans to continue developing these ideas through traditional push mechanisms such as testing at laboratory facilities and future small business funding opportunities.16

Although determining the suitability of a prize can require extensive consideration, this preparatory work has not put a damper on experimentation in the past five years. Many agencies, such as NASA, embrace prizes and translate their growing confidence and experience into policies that codify and explain their problem-solving strategies.17 The White House Office of Science and Technology Policy provides annual progress reports on prize competitions offered by federal agencies, and the Office of Management and Budget offers detailed legal guidance to prize designers. This work can be immensely helpful for less experienced organizations considering similar approaches.

Most experienced designers consider prizes to be just one important problem-solving approach in a larger portfolio that includes challenges and other, traditional approaches as well. In some cases, for example, NASA program managers have folded challenge outputs into grants or in-house R&D efforts. In other cases, the agency uses traditional contract arrangements to implement designs solicited from prizes. NASA’s designers view push and pull mechanisms not in isolation, but in varying combinations custom-designed to achieve their desired outcomes.18

Evaluating legal options

Public sector leaders can’t simply design and execute a prize without first evaluating their legal authority to do so, particularly when it involves paying cash to winners. US government prize designers, in particular, must look carefully at the legal constraints they face. Typically, this involves early liaison with general counsel to avoid unwelcome surprises.

For federal agencies, several laws can affect incentive prizes. The most well-known is the America COMPETES Reauthorization Act of 2010, which provided broad authority for every federal agency to conduct prizes in the service of their missions. America COMPETES created a clear, simple legal path for using these tools and complemented other pre-existing agency-specific prize authorities.19 One key aspect of the prize authority provided by America COMPETES is that federal agencies are able to co-fund prizes (both the prize purse and administration costs) with other agencies as well as private sector and philanthropic organizations.20

In 2010, the Office of Management and Budget issued guidance on various legal authorities and provisions, intellectual property considerations, and other issues affecting prizes in a memorandum called “Guidance on the use of challenges and prizes to promote open government.” This memorandum provides prize designers and their legal counsel with a useful starting point for developing their own legal strategies.21

The need to build a legal strategy applies to state and city prizes as well, because legal requirements must be considered in light of desired outcomes. For example, designers of the New York City Big Apps Challenge intended to spur the development of tech businesses and therefore opted to let participants retain the intellectual property rights of the apps they created.22

The conclusion of a prize also poses legal considerations that should be addressed early in the design phase. Perceptions of faulty evaluation criteria or unfair judging procedures can lead participants to take legal action, especially if the stakes are high. Committing to the transparency of the judging process and ensuring that participants can view scoring and selection criteria when they register for the prize can ameliorate such issues.

In the federal context, the Government Accountability Office recently ruled that it did not possess the legal authority to adjudicate a dispute related to a prize offered by the Federal Trade Commission, despite its well-established ability to do so for contracts.23 This ruling raises important questions about how the federal government will handle prize-related conflicts in the future.24 It also underscores how important it is for prize designers to build prizes that are highly transparent, with independent judging panels and, for worst-case scenarios, conflict resolution processes.

After reviewing these considerations and engaging in an iterative problem definition process, designers will be ready to begin building a prize.

Linking prize outcomes to prize design

Prize design elements: Definitions

Designing a successful prize can be a daunting task. No one formula is adequate because each prize addresses a unique problem and set of potential participants whose incentives must be carefully understood.

Many public organizations do not possess all of the skills and capabilities needed to design an effective prize, such as online platform development or marketing expertise. In some cases, the necessary abilities involve distinct and highly specific insights into market dynamics or participant incentives. And in almost all cases, designers need help with problem definition, because a poorly defined problem statement can make it extremely difficult to achieve the desired outcomes.

Despite the unique nature of each problem, designers can rely on certain common elements. These can be thought of as ingredients, combinable in various ways to design prizes that generate specific outcomes. All of the elements matter, together forming an integrated and often complex set of strategic choices. How designers assemble and use them is at the heart of prize design.

There are many ways, for example, to craft a communications strategy to draw the attention of potential participants to a prize. But who should develop the communications campaign and its messaging? What channels should be used? How much time and money can be spent on the campaign? How can we measure its success? These are just some of the questions that designers must answer.

The strategic choices involved in challenge design can be grouped into five core design elements:

Resources: Funding, labor, open datasets, online platforms, testing protocols, facilities, and partnerships—the infrastructure of the prize

Evaluation: Selection criteria, judging protocols, and winner selection as well as measurement of the prize’s impact and long-term legacy

Motivators: Cash purse and other non-monetary incentives that can attract and reward participation, such as mentorship, collaborative opportunities, public recognition, validated performance data, and exposure to experts and luminary judging panels

Structure: Rules that shape the prize’s operations, classes for different types of participants, eligibility requirements, intellectual property requirements, timeline, stages, and other parameters

Communications: Marketing and stakeholder management methods used to reach potential participants and partners and to raise awareness of the goals, progress, outputs, and outcomes of the prize

Designers consider these elements of prize design from the very early stages of problem definition to the period after the prize concludes, when sustaining participants’ energy and focus can significantly help to achieve outcomes. Below we discuss these elements and feature examples of how designers use them to create, implement, and ensure the legacy of their prizes.

Resources—You don’t have to do it alone

There are four critical resource phases: design, implementation, award, and post-prize “legacy” activities. Depending on the desired outcome, these phases can be quite variable in terms of length, cost, and demand on resources. They can involve a few or many small contracts for vendor services as well as different types of partnerships. Most importantly, as each of these phases unfolds, designers learn a wealth of new information about what successful execution will require, with inevitable impacts on resource requirements and timing.

One major resource requirement, of course, is funding for the purse.25 Since the purse is often relatively small, it can be tempting to view prizes as less-expensive alternatives to more traditional grants and contracts. Even if no one wins the prize, however, its administration costs can be substantial, particularly if the goal is to achieve outcomes that could require significant commitments to marketing, mentorship, and networking. For example, LAUNCH, a global challenge led by NASA, USAID, the Department of State, and NIKE Inc., aims to identify and support innovative work contributing to a sustainable future. The initiative focuses on spurring collaboration among innovators; it offers no monetary incentives, but instead invests its resources in helping participants develop and scale their solutions.26

Furthermore, prize administration involves significant costs that fall into different categories including, but not limited to, labor, technology platform, marketing, events, travel, and testing facilities.27 It requires a diverse set of abilities and experiences, obtained in-house or through in-kind support from partners and paid vendors. Each designer must define the right mix of in-house and external support by first assessing the organization’s abilities.

Labor costs are involved in developing prize rules, advertising the prize, connecting with participants, administering interactions among stakeholders, judging entries, and evaluating the success of the prize after award. These activities will require a diverse team, with subject-matter experts to develop, advertise, and judge the prize, and experienced administrators to run it.28 Effective designers should consider the labor resources required for each phase of the prize, such as estimating the number of potential submissions to ensure the availability of a sufficient number of judges. Bloomberg Philanthropies’ Mayors Challenge, for instance, assessed how many submissions it might receive by sending RSVP cards to potential competitors.29 The challenge also planned for and included labor costs that extended beyond award to establish a lasting legacy for the prize. For example, post-award coaching, technical assistance, and networking were provided in order to continue to spur action following the award.

The technology platform used to facilitate certain prizes also represents a major cost, as well as a critical component for success. Such online platforms can help target the right audiences, enforce rules, and standardize submissions. NASA’s Mapping Dark Matter challenge, for instance, sought an algorithm for mapping dark matter, an elusive task that has stumped astronomers for years. NASA partnered with the online challenge platform Kaggle, using its leaderboard feature to offer an environment allowing data scientists and mathematicians to collaborate and compete. Kaggle’s platform enabled the creation of a specialized community that ultimately included 73 teams. Within 10 days, a doctoral candidate in glaciology from Cambridge University had built an algorithm that outperformed NASA’s existing one.30 When considering different platforms, designers can evaluate a few key cost elements such as platform access fees and design consulting.31 Appendix D offers more information on online challenge platform vendors.

Additionally, certain administrative costs may be directed toward activities that improve or strengthen submissions, including standard, accessible data; consulting or coaching support; and testing facilities. For example, a number of US government agencies have provided easy access to data and data standards to help developers improve entries in apps challenges such as DOE’s Apps for Energy and Apps for Vehicles challenges.32 This support was provided more directly in the Progressive Insurance Automotive X PRIZE, where semi-finalist teams were given vouchers for consulting services from private consulting firms and national laboratories to help them improve their designs.33 Testing facilities are another resource that many participants lack when developing their prototypes; providing them helps participants improve and iterate on their designs in a laboratory setting. The US government has been a key provider of such facilities: in the Wendy Schmidt Oil Cleanup X Challenge, a Department of the Interior testing facility hosted physical and laboratory testing of finalist prototype designs for high-performing oil cleanup equipment, and in the Progressive Insurance Automotive X PRIZE, Argonne National Laboratory provided dynamometer testing of the super-efficient finalist vehicles.34

Because prizes are still relatively novel, designers must often commit resources to mobilizing their own organizations, typically by enlisting internal champions. Most champions are senior executives, but they can also be other employees with the networks and political capital needed to generate momentum. Champions can clear away significant internal barriers by clearly communicating to employees how solutions derived from the prize will supplement and support those developed within the organization.

Finally, designers should expend resources to find partners that can help fund prizes and play various strategic roles in execution. Many designers carefully assess their own internal capabilities to understand the kind of partner support they may need. As categorized by Raymond Tong and Karim Lakhani, partners can play a variety of roles across a spectrum: a “host” who develops and oversees the prize, a “coordinator” who solicits others to develop operational components, or a “contributor” who assists the hosts with these tasks.35 For example:

Host—Ashoka Changemakers has teamed up with the LEGO Foundation to seek educational innovations through the Re-imagine Learning Challenge. The challenge is hosted on the Ashoka Changemakers website and uses its infrastructure. The three-year partnership includes a pledge of more than $200,000 from the LEGO Foundation to support the challenge.36

Coordinator—Humanity United convened the Partnership for Freedom, with sponsors including the White House, the Department of Justice, the Department of Health and Human Services, and the Department of Housing and Urban Development. This partnership launched Reimagine: Opportunity, the first of three challenges designed to improve the support infrastructure for survivors of modern-day slavery, resulting in more than 160 applications and 12 highly innovative solutions.37

Contributor—The UN Development Program provided funding and guidance to Nesta for a challenge focused on developing sustainable, cost-effective, off-grid renewable energy supplies in rural Bosnia and Herzegovina.38

Many designers believe that partners from the private, public, and philanthropic sectors can help unleash the full potential of prizes.39 For example, the Hurricane Sandy Task Force launched Rebuild by Design, a multi-stage challenge to create designs that increase the resiliency of regions affected by Hurricane Sandy. The challenge administration involved a mixture of partners from federal (Department of Housing and Urban Development, National Endowment for the Arts), academic (New York University Institute for Public Knowledge), and non-profit (Regional Plan Association, Municipal Art Society of New York, and Van Alen Institute) organizations. The $2,000,000 purse was funded entirely by the Department of Housing and Urban Development’s philanthropic partners, led by the Rockefeller Foundation. Through this integration of partners, the challenge resulted in the participation of 148 teams from more than 15 countries. Ten finalists received $200,000 and met with community leaders and stakeholder groups to receive feedback and compete for the opportunity to implement their designs.40

When selecting partners, designers often consider a number of factors, including what control may be ceded to partners in prize administration, and how their brands and support can help the prize succeed.

Evaluation—Building a road map, checking progress, determining impact

Evaluation includes a broad set of assessment and measurement activities that occur during every stage of a prize. It involves the initial determination of whether a prize is likely to be effective and appropriate, assessment of the quality of implementation processes, development of the criteria and mechanisms used to select winners (including providing feedback to participants during and after the prize), and evaluation of impact and overall value. Proper evaluation is critical because it can affect whether participants view the prize as fair, shape the validity of the results, and, thus, ultimately determine its success. Effective evaluation is also an essential input to strong prize management, both to improve implementation processes and to inform decisions about whether to use a prize again.

In the early stages of design, there are two useful evaluative techniques. The first, sometimes called “theory of change,” involves identifying how the prize, through its structure, rules, and activities, will incent participants to engage in the behaviors that will help solve the defined problem. For example, a monetary reward may prove to be a stronger incentive for some participants than the opportunity for professional networking or coaching. This is also a good time to determine how prize-generated incentives may be influenced by the external environment (that is, incentives from other domains, such as the market) and other interventions, such as previously existing challenges seeking similar outcomes.

Second, using research and logical analysis, it is important to check whether the planned challenge activities and outputs are likely to achieve the desired outcomes. This evaluative technique includes identifying other factors that would be likely to help or hinder the achievement of these outcomes. The major benefit of this early assessment is that the design can still be changed to address these factors, including adding activities to reduce risks or reinforce positive outputs, such as elements of a broader program that supports scaling up once the prize has identified winners. To properly evaluate the prize, designers should develop indicators consistent with their theory of change for the prize’s activities, milestones, outputs, and outcomes.

The quality of the implementation processes should be evaluated during and after the prize to determine whether discrete activities were actually successful. For example, some designers undertake special efforts to identify participants with particular characteristics. In some cases, this recruitment involves finding participants with specific technical expertise; in others, the goal may be to engage new and diverse individuals and organizations in the problem-solving space. In all cases, capturing good information about these processes during implementation can guide efforts to iteratively improve engagement activities for the current prize and provide insight into more effective engagement efforts for future prizes. Similarly, evaluation should include looking for patterns of who initially engages but then drops out or fails to continue through several rounds. It may be that the prize needs to be redesigned to provide additional support or that the current process is effectively winnowing out those who are unlikely to provide useful ideas or results.

A unique element of evaluation in prizes is defining the criteria used to select winner(s). In creating these criteria, designers are shaping how participants will work, preventing unintentional and undesirable outcomes and curbing potential fraud. Appropriate selection criteria are grounded in and consistent with the overarching view of how the prize will generate change or solve a problem. Because the wrong criteria could lead participants to submit solutions that do not actually address the fundamental problem, designers often review their selection criteria repeatedly, working with internal and external stakeholders to anticipate and account for all possible responses.

One helpful practice for designers to follow is to open up draft rules for a period of public comment, as was done by USAID recently for its potential challenge for desalination technologies, by the Department of Energy for its potential challenge on home hydrogen refueling technologies, and by NASA for its various Centennial Challenges.41

Designers should also carefully consider whether to use quantitative or qualitative criteria, or a mix of both. The Department of Defense’s HADR challenge, which seeks a kit for use in humanitarian assistance and disaster relief, sets specific quantitative criteria for acceptable solutions—weight of less than 500 pounds, constant one-kilowatt power production, production of 1,000 gallons of water per day, and so on.42

When quantitative criteria are not applicable or relevant, clear parameters and appropriate evaluation arrangements become even more critical. In the case of the Prize for Community College Excellence, the Aspen Institute needed to find a way to evaluate qualitative data about US community college performance. To make this process as rigorous and independent as possible, the institute employs a third-party evaluator that specializes in evaluation criteria framework design and in collecting and analyzing such data to ensure a strong basis for evaluation.43

To ensure validity and objectivity in the evaluation process, designers should determine who will judge submissions. Expert judging can be effective when the desired solution is highly technical, while crowdsourced voting is valuable when the goal is to engage public participation.44 Some organizations have begun to examine how crowdsourced selection can lead to viable solutions. For example, DARPA’s Experimental Crowd-Derived Combat-Support Vehicle Design Challenge solicited vehicle concepts from the public for different missions. The challenge also sought to examine the question, “How could crowdsourced selection contribute to the goals of defense manufacturing?”45 While crowdsourcing the evaluation of winners can work and, at the same time, draw publicity, expert judging provides two distinct benefits. Judges with particular domain expertise can lend credibility to the challenge results and can improve submission quality through formal and informal feedback, if such feedback is built into the prize structure.

One of the important elements of high-quality evaluation is to revisit the criteria at the end of the prize and assess whether they were appropriate: Did they lead to the selection of the best winning solution(s)? If the winner did not perform well, and some unsuccessful participants seemed stronger, it might be that the criteria were not right or were not operationalized correctly. For example, if simple weighting is used to derive an overall score, a proposal that scores badly on one criterion and well on another might end up the overall winner, even though it was inadequate in a vital area.
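This weighting pitfall can be made concrete with a short sketch (Python used for illustration; the criteria, weights, and scores are all hypothetical). A simple weighted sum lets an entry that fails a vital criterion win overall, while a per-criterion minimum threshold catches it:

```python
# Hypothetical criteria with integer weights (summing to 10) -- illustrative only.
WEIGHTS = {"performance": 5, "cost": 3, "safety": 2}

proposal_a = {"performance": 10, "cost": 9, "safety": 2}  # weak in a vital area
proposal_b = {"performance": 7, "cost": 7, "safety": 8}   # solid across the board

def weighted_score(scores):
    """Simple weighted sum across all criteria."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def gated_score(scores, minimum=4):
    """Weighted sum, but any criterion below the minimum disqualifies the entry."""
    if min(scores.values()) < minimum:
        return None  # disqualified despite a possibly high aggregate score
    return weighted_score(scores)

# Simple weighting lets A win despite its poor safety score...
print(weighted_score(proposal_a), weighted_score(proposal_b))  # 81 72
# ...while a per-criterion floor disqualifies A.
print(gated_score(proposal_a), gated_score(proposal_b))        # None 72
```

One common remedy, consistent with the point above, is to pair weighted scoring with pass/fail gates on must-have criteria so that a high aggregate score cannot mask a disqualifying weakness.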

Another major component of evaluation is measuring prize impact. Designers should develop measurable indicators of success before launching the prize. Without these indicators and corresponding impact evaluation approaches, the prize may conclude without producing a clear understanding of whether it achieved or at least advanced the organization’s goals, which can be disheartening to participants and designers alike. Thus “evaluability” should be an explicit objective of prize design.

Developing measures of success during the design phase can be helpful in several respects. It reinforces discipline in the design team to ensure that design elements link to desired outcomes. It shows skeptical stakeholders that the prize’s effectiveness can be gauged objectively. And it assists the organization in assessing its overall return on investment. In anticipation of end-of-prize impact evaluation, measures of success can be deployed for intermediate outcomes, such as milestones for building prototypes or website page impressions for raising awareness. In addition, designers can evaluate other important intermediate outcomes, such as strengthening the community of participants, improving their skills and knowledge, and mobilizing capital on their behalf.

Because measures of success can be both quantitative and qualitative, effective evaluation will typically include systems to gather both kinds of data systematically and also capture unexpected data, such as wider impacts of the prize process. Common approaches include:

Measuring funds leveraged. The MIT Clean Energy Prize, for example, distributed $1,000,000 to its winning teams. The teams were asked to develop business plans for the prize, and submissions generated $85,000,000 in capital and research grants.46

Comparing outcomes with alternatives. The Talent Dividend Prize sponsored by CEOs for Cities and the Kresge Foundation, for instance, supports college graduation with a $1,000,000 prize. The designers measured returns by comparing the results with the opportunity cost of four fully funded college scholarships. In this case, the prize produced more than four college graduates and was therefore judged a success.47

Assessing reach and influence. For certain outcomes, such as raising awareness and mobilizing action, evaluation can involve tracking net new followers and activities undertaken by participants during and after the prize to build on what they produced. The EPA ENERGY STAR National Building Competition, Battle of the Buildings, used a “Biggest Loser”-style competition to save energy and reduce greenhouse gas emissions. To make their results more meaningful and measurable, the EPA asked participants to find creative ways to contextualize how much energy their buildings were saving. Some of these submissions went viral and grabbed the attention of Good Morning America and The New York Times.48
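The first two approaches reduce to simple arithmetic. The sketch below (Python, for illustration) uses the purse and capital figures cited in the examples above; the per-scholarship cost is implied by those figures, and the graduate count is a hypothetical assumption:

```python
# Funds leveraged: ratio of outside capital mobilized to prize outlay,
# using the MIT Clean Energy Prize figures cited above.
purse = 1_000_000
capital_raised = 85_000_000
leverage_ratio = capital_raised / purse
print(leverage_ratio)  # 85.0 -- each purse dollar mobilized $85 of outside capital

# Comparing outcomes with alternatives: the Talent Dividend approach weighs
# the purse against the opportunity cost of four fully funded scholarships
# (about $250,000 each, implied by the figures above).
scholarship_cost = 250_000
breakeven_graduates = purse / scholarship_cost
print(breakeven_graduates)  # 4.0 -- the prize succeeds if it produces more

actual_graduates = 6  # hypothetical count, for illustration only
print(actual_graduates > breakeven_graduates)  # True -- judged a success
```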

To create these metrics, designers should consider what evaluation indicators and measures can be collected during the prize (that is, media impressions or surveys of competing teams that collect information regarding dollars/hours spent preparing solutions), and what outputs and outcomes should be assessed in the months and years following the challenge (that is, follow-on investment, change in public opinion, market adoption, scale, and behavior change decay rate). The latter measures may require significant investment of time and resources during the “legacy” phase post-award. Designers should also note that getting post-award data from participants may necessitate building reporting requirements into the prize rules to enforce compliance or allow access.

The use of objective, third-party data such as government statistics can increase the credibility of the prize evaluation process, but in almost all cases it is necessary for designers to obtain new data. For the Aspen Prize for Community College Excellence, for instance, the Aspen Institute first worked with a data/metrics advisory panel to develop a model for selecting the top 120 US schools. The institute then asked the eligible institutions to submit applications featuring data about how they were advancing student learning. Working in tandem with the data/metrics advisory panel, the institute organized and analyzed these data to determine winners.49

There should be an overall evaluation of whether the prize was worth it. This is not a simple matter of comparing the direct cost of running the prize to the value of the solution produced. In some cases, a prize might have been unnecessary, and the solution would have come about through other means. In other cases, the wider impact on participants who don’t win, including those who go on to develop new innovations because of what they learned during the prize, will be significant.

Measuring changes should not only be limited to positive impacts. Particularly for government agencies, there should be follow-up to explore whether there have been unintended negative impacts of the prize implementation. Return on investment calculations often leave out the wider costs incurred by other parties in the process. An overall “value for effort” calculation, taking into account positive and negative impacts on winners and losers as well as resources used by other parties, provides a more reliable and comprehensive view of the merit, worth, and value of a prize. In particular, such an analysis would be helpful in checking for wider potentially negative impacts—such as organizations becoming less inclined to participate in prizes because of the low return on their investment.
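A "value for effort" tally of this kind can be sketched as simple bookkeeping over all parties. The sketch below (Python, with entirely hypothetical figures) shows how a prize can look worthwhile on a narrow ROI basis yet come out negative once the costs borne by non-winning participants are counted:

```python
# Hypothetical ledger for a concluded prize -- all figures are illustrative.
# A narrow ROI compares solution value to sponsor cost; "value for effort"
# also counts costs and benefits accruing to parties other than the sponsor.
sponsor_costs = {"purse": 500_000, "administration": 300_000}
solution_value = 1_200_000         # estimated value of the winning solution

# Wider effects on other parties:
participant_effort_cost = 900_000  # aggregate cost borne by all entrants
spillover_benefits = 400_000       # e.g., follow-on innovations by non-winners

narrow_roi = solution_value - sum(sponsor_costs.values())
value_for_effort = (solution_value + spillover_benefits
                    - sum(sponsor_costs.values()) - participant_effort_cost)

print(narrow_roi)        # 400000  -- looks clearly positive to the sponsor
print(value_for_effort)  # -100000 -- negative once all parties' costs count
```

A negative result of this kind would flag exactly the risk noted above: participants absorbing losses may become less inclined to enter future prizes.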

In addition to measuring the changes that have occurred, there should be some investigation of the extent to which change can be attributed to the prize. Experimental and quasi-experimental designs, involving a control or comparison group of participants, may be feasible in some circumstances, but they are unlikely to be cost-effective or ethically acceptable given the human subjects involved. Instead, rigorous non-experimental approaches to causal attribution and contribution are useful for identifying possible alternative explanations for the impacts and determining whether they can be ruled out.

These various approaches to evaluation need more than a few simple metrics to track. Designers need to think carefully about what they are trying to assess, when and how, so that they can surface the most helpful insights for their current and future prizes. Designers sometimes create independent teams to assess the success of their work, as illustrated by the Rockefeller Foundation, which uses an evaluation group to study the impact of its innovation projects.50

Motivators—You get what you incent

Motivators spur participation and competition. These incentives should encourage the right participants in the right ways to do the work required by the prize. Successful designers use motivators to increase the participants’ return on their investments of time, effort, and resources.

The prize award itself is, of course, the most visible motivator, encouraging participation and channeling competitive behavior toward the desired outputs and outcomes. Historically, awards have included cash purses, public recognition, travel, capacity building (that is, structured feedback and skills development), networking opportunities (that is, trips to conferences), and commercial benefits (that is, investment and advance market commitments). Public sector challenges often feature diverse awards. At one end of the spectrum is the Department of Energy’s L-Prize, which offers a $10,000,000 cash award and an advance market commitment to those who develop the next-generation light bulb. At the other end is the Department of Health and Human Services’ Apps Against Abuse, which targets domestic violence and motivates participants solely with a public winner announcement by government leaders.51

The size and type of award provides designers with important signaling effects and leverage opportunities. Designers typically try to ensure that the purse is commensurate with the magnitude of the problem, the types of participants required, the amount of time likely to be involved in reaching a solution, and the amount of media and public attention desired. Qualified participants are unlikely to compete if the prize offers a small purse but requires a year or more of effort on a hard problem. For prizes that require commercial participants, such as established companies or startups, the purse must be economically interesting in the sense that it could defray research and development costs, pay for certain types of risks and opportunity costs, or provide something companies can highlight for branding purposes, such as third-party validated performance data or a “badge” marking the company’s submission as successful in the prize. Large purses are also more likely to encourage the formation of new teams that include technicians, experts from relevant disciplines, and investors. For prizes seeking outcomes such as development of prototypes, pilots, or market stimulation, this element of design is critical because it helps designers attract outside capital.

Mentorship also can be a motivator and is used increasingly in prize design. Designers can incorporate mentorship in the prize structure, providing participants with access to experts, tools, leading practices, and other resources to accelerate the development of high-quality solutions and support the formation of communities of interest around the problem.52 Participants do not need to win to benefit from this experience.

Some designers pair winners with industry leaders to drive post-award momentum. The Apps 4 Africa challenge, established by Appfrica (one of Africa’s oldest acceleration programs), provides winning African technology entrepreneurs with mentors who help them with business development and product design. This mentorship has helped 11 new companies raise an average of more than $90,000 each in follow-on funding.53

Many designers are developing collaborative environments, enhancing knowledge sharing among participants by developing rules and evaluation criteria that encourage them to work together. Some intentionally develop opportunities for traditional participants to collaborate in problem solving, using virtual and in-person team summits and participant “bootcamps.”54

But collaboration in prizes is not always useful. Intentional matchmaking among participants can be tempting, but it can also lead participants or observers to think the prize is fixed or that its administrators are interfering too much in the prize’s outcomes. Furthermore, while collaboration may be appropriate for achieving certain outcomes, fierce competition can also be useful, particularly for shortening product development timelines. Designers should carefully evaluate this trade-off between collaborative and competitive motivations when thinking about the best path to a particular outcome. For example, if seeking a new prototype, the intensity of competition may need to be high to accelerate prototype performance on an aggressive timescale. If, however, the designer is seeking increased engagement among a population, then more collaboration may inspire others to begin participating in the prize.

Finally, for certain outcomes, intellectual property rights can serve as a powerful motivator. The prize sponsors’ degree of ownership over submissions is a key design consideration. Do they want to use the solution in a proprietary manner, require that solutions be made available to the public through an open source license, or simply have access to it in the marketplace? The options range from full retention of rights by participants to full retention of rights by the organization running the prize. One important consideration for US government leaders interested in stimulating innovation is how the America COMPETES authority protects participants’ intellectual property.55 Regardless of where the prize falls on this spectrum, clear, upfront terms of ownership are critical. The rules for the US Air Force’s Fuel Scrubber challenge, for example, clearly stated that winners would retain their intellectual property rights, signaling in advance that challenge participants could commercialize their winning solutions and profit from them in the market.56

Structure—Your boundaries set the frame

Structure, or prize architecture, is the set of constraints that determines the scale and scope of the prize, as well as who competes, how they compete, and what they need to do to win. A competition period that lasts too long risks losing participant interest, while one that ends too quickly may not give participants enough time to develop solutions. Winner-takes-all prizes can discourage participants with low risk tolerance. Those with well-defined phases and milestones can modulate competition, winnow participants at different stages, and reward only the most innovative solutions. Due to such considerations, successful designers devote significant time and effort to prize architecture.

Eligibility requirements shape the population of participants. Which participants should designers target—individuals, teams, organizations, established institutions, or even political entities such as cities or states? The choice involves at least two considerations. First, given the desired outcome, who is best positioned to solve the problem? Who has the right skills, resources, and interests? Second, if the desired outcome includes a form of engagement extending beyond the immediate pool of potential participants, how can they influence the larger community or stakeholder group? It is worth noting that in the case of challenges sponsored by the US government, participant eligibility is shaped by the authorities under which the challenge is administered.

The Georgetown University Energy Prize, sponsored by the Joyce Foundation, the American Gas Foundation, and the Department of Energy, among other partners, challenges communities “to work together with their local governments and utilities in order to develop and begin implementing plans for innovative, replicable, scalable and continual reductions in the per capita energy consumed from local natural gas and electric utilities.”57 This example provides insight into how designers can structure eligibility requirements to shape team formation and expand the influence of the prize beyond individual citizens.

Successful designers often try to define their prizes in ways that will attract the largest and broadest pool of participants, as the most innovative solutions often come from those without previous exposure to the underlying problem. Even when casting a wide net, however, designers should be careful about eligibility. For some, the quality of submissions is more important than their quantity, or resource constraints may dictate a smaller participant pool, making restricted eligibility the best choice. For others, the variety and sheer quantity of submissions that can be obtained from broad eligibility requirements are more desirable. Narrow eligibility requirements thus may be best for a prize seeking a handful of thoughtful concept papers about a technical solution, while broad requirements could be better for a challenge seeking a new logo design.

If multiple types of participants are desired, designers should consider whether a certain team profile increases the possibility of a successful outcome. Additionally, designers must think about whether different types of participants should compete in one pool or be separated into different categories. For example, the US FIRST Robotics Competition hosts four age-based classes of challenges for students aged 6–18: Junior FIRST LEGO League, FIRST LEGO League, FIRST Tech Challenge, and FIRST Robotics Competition. The FIRST Robotics Competition requires a minimum of 15 high school students and 3–6 professional adult mentors per team.58

Prize length typically consists of two periods: submission development and judging. The former requires designers to determine the time participants are likely to need to reach a particular outcome. For example, the Case Foundation’s Finding Fearless competition was focused on generating ideas to solve chronic social challenges and gave participants only 20 days to submit their ideas. DARPA’s UAV Forge competition, by contrast, gave participants 152 days to showcase a working prototype of an unmanned aerial vehicle.59 Data on prize length is detailed in the following sections by outcome. Designers should note that the lengthier the prize, the higher the likelihood of administrative staff turnover; it is therefore critical that designers document the rationale and assumptions behind key design decisions so that desired outcomes survive any staff transitions.

Designers often engage with subject-matter experts or potential participants to develop a realistic assessment of the time needed for solution development and the likely number of submissions. This information can also be used to estimate the appropriate number of judges needed to ensure a timely review. The selection of judges with the appropriate technical expertise and availability to commit their time for thorough reviews is critical for outcomes focused on developing prototypes and stimulating markets. Designers should estimate the time required for an individual judge to assess submissions, or for a panel of judges to reach consensus on the relative merits of prize submissions, and use those estimates to determine the number of part-time judges needed. If that number becomes unwieldy for challenge administrators, designers should consider compensating a smaller number of judges to provide full-time evaluation support.

Designers also consider various forms of challenge segmentation to encourage certain kinds of behavior. Dividing the challenge into rounds can allow participants to modify and improve their submissions, thus increasing their quality. As an example, the National Institute of Justice’s Ultra-High Speed Apps challenge has two phases, the first solely for the generation of app ideas and the second for actual software development.60
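The judge-sizing arithmetic described above can be sketched in a few lines. This is an illustrative estimate only: the function name `judges_needed` and all the sample figures (submission volume, hours per review, judge availability) are assumptions for demonstration, not data from any actual challenge.

```python
import math

def judges_needed(expected_submissions: int,
                  hours_per_review: float,
                  hours_per_judge: float,
                  reviews_per_submission: int = 2) -> int:
    """Estimate how many part-time judges a review window requires.

    Total review workload is spread across judges who can each commit
    a fixed number of hours during the judging period.
    """
    total_hours = expected_submissions * hours_per_review * reviews_per_submission
    return math.ceil(total_hours / hours_per_judge)

# Hypothetical scenario: 400 submissions, 1.5 hours each, two independent
# reviews per submission, and judges who can each commit 40 hours:
print(judges_needed(400, 1.5, 40))  # 30
```

If an estimate like this yields an impractically large panel, that is the signal, noted above, to consider compensating a smaller number of full-time judges instead.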

Some designers segment their prize structure by topic, with multiple related sub-challenges taking place concurrently. This can increase the prize’s impact by elevating the importance of certain topics and attracting a broader set of solutions. The EPA’s Campus RainWorks Challenge, for instance, invites students to design an innovative green infrastructure project for their campus, offering two topic areas. One category involves designing a master plan for a broad area of campus; the other seeks designs for a smaller location.61

Designers can also segment prizes by geography, with simultaneous challenges in separate locations (such as state challenges leading to a national final round). The Strong Cities, Strong Communities (SC2) Challenge is a federal interagency initiative seeking innovative ideas to incent economic development. The challenges are customized to the areas they are designed to help: Las Vegas, Nevada; Hartford, Connecticut; and Greensboro, North Carolina.62 Such a strategy can help manage larger-scale challenges and focus attention on site-specific solutions for targeted areas.

Communications—If you build it, they may not come

Communications serve several different strategic goals. They can attract participants, spur them to compete, and maintain their interest afterward. Also, communications keep partners and stakeholders informed about the purpose and progress of the prize, helping to secure their support and, in some cases, funding. For many designers, communications are also a mechanism for achieving certain specific outcomes, such as building market awareness of new capabilities or public enthusiasm for new behaviors that further the public good. Because communications are so important, designers should plan and invest carefully to build the right buzz.

Effective prizes use robust branding plans to build recognition and credibility among the participant and stakeholder communities. This can be achieved through press releases, social media, and targeted invitations, using the organization’s and partners’ networks where appropriate. During Bloomberg Philanthropies’ Mayors Challenge, for instance, challenge administrators sent personalized invitations to eligible cities outlining the challenge’s importance.63 Establishing a clear and powerful brand is critical to the post-award legacy of the challenge and will significantly impact the sponsors’ ability to attract public attention and the desired participants to future rounds. Many broadly recognized challenges dedicated significant time and resources to building a lasting brand, including the Mayors Challenge, XPRIZE, and the NASA Centennial Challenges.

To build credibility, designers should clearly publicize rules and evaluation criteria and regularly update participants and stakeholders on the process. To facilitate these communications, external partners can provide expert advice and support. For example, Nesta has partnered with the UK Department for Business, Innovation and Skills and made use of their combined networks to market its Open Data Challenge Series to potential participants.64

Strong communications help designers to manage relationships with participants and partners during prize implementation. It’s useful to create regular check-ins with participants and provide them with effective communication channels to discuss any issues that may arise. Check-ins also provide participants with feedback that can lead to more effective solutions. For example, the Department of Energy’s National Geothermal Student Competition featured two phases. The first 30-day phase required an initial concept paper. Teams chosen for advancement were then required to participate in three biweekly review meetings and submit regular reports documenting their progress over the course of the challenge to ensure they were progressing toward a final product.65

Designers attempting to build communities or markets typically establish post-award messaging capabilities. This may involve periodic post-award webinars; publications summarizing lessons learned, data captured, and aggregate outputs from the prize; “road shows” to visit relevant conferences, agencies, legislators, and other stakeholders; reunion conferences that encourage participants to discuss their progress; or even online collaborative spaces. For example, the International Space Apps Challenge was a two-day “hackathon” that included 9,000 people who met at 83 locations as well as 8,300 remote participants. Together, they worked on 50 different NASA challenge topics and developed 770 solutions in the course of one weekend. After the global awards, local leads from each location facilitated the creation of Google Groups to serve as a medium for ongoing communication and idea sharing between the participants.66

Prize design outcomes

In the last five years, public sector prize design has become increasingly diverse and sophisticated, with a shift in focus from prize types to outcomes. In the past, the selection and use of a prize type, such as a “point solution” prize for new technology, reflected a somewhat rigid belief that prize types and outcomes should match exactly. As designers have become more comfortable and flexible in crafting prizes, they are finding that it is better to begin with the outcomes they want to achieve and then assemble the right mix of design elements to achieve them.

In this section, we examine the six key outcomes designers most often pursue as well as the prize design elements that are critical for achieving these outcomes. While designers should recognize that prizes usually require all five of the elements of design introduced above, we highlight those elements that are most important to get right to ensure that the prize achieves its intended outcome. We also know that many prizes seek and achieve multiple outcomes. Consider the MIT Clean Energy Prize, which distributed $1,000,000 to its winning teams. While the prize explicitly solicited business plans, it has also stimulated the market by generating $85,000,000 in capital and research grants.67 Many advanced designers attempt to use prizes both to develop markets for a technology, good, or service and to create social impact. Appendix A offers more detailed guidance.

Advanced prize designs can reach a range of actors. For outcomes aligned to developing ideas, technologies, products, or services, designers typically focus on the participants who are creating models or tangible items to achieve a particular outcome. For prizes aimed at engaging people, organizations, and communities, designers are generally concerned with participants as well as a broader audience that may include people, groups, organizations, or even institutions.

As designers work with the elements of design to build a prize, they also consider its legacy. Using prizes or challenges more generally to achieve certain outcomes requires taking the long view. Designers evaluate how a prize will work with other problem-solving approaches that their organizations may be able to deploy. They make plans to engage participants and broader audiences after the prize concludes to reinforce key messages, branding, or desired behaviors. They build post-prize activities and foster networking and learning opportunities to help participants strengthen and refine the innovations that were incented by the prize. When designers want to stimulate markets, they may develop a series of challenges that pull participants through different stages of the innovation process—first a prize to produce, test, and improve a model, and then perhaps an advanced market commitment to help winning participants gain traction in an emerging market. Designers who ignore their post-prize legacy when trying to assemble the elements of design risk undermining their own desired outcomes.

Developing ideas, technologies, products, or services

Attract new ideas: Solicit concepts and techniques

Prizes allow designers to identify and expand on fresh, innovative ideas. They can focus the efforts of many people with widely varying viewpoints on a broad range of public problems. The prize can gather existing ideas, expand existing ideas, or help create new ones, especially if new participants are brought into the solution space, given additional resources, or stimulated with new ideas and connections. As Michael Smith from the Corporation for National & Community Service and formerly of the Case Foundation put it, “Prizes give you a way to lift up an idea.”68 Idea outcomes may take the form of:

Pithy taglines, such as the Federal Voting Assistance Program’s Slogan Contest, whose submissions could not exceed 15 words69

Theoretical concepts, such as the Department of Defense Humanitarian Airdrop Prize, which sought white papers on how to drop food and water out of planes safely and effectively70

Actionable business plans and detailed technical design specifications, such as the National Institute of Justice Body Armor Challenge, which sought 30-page technical approaches for testing the integrity of body armor71

In order to generate useful submissions, effective designers often provide participants with context about why they are seeking ideas and what they intend to do with them. For example, the Rebuild by Design competition administered by the Hurricane Sandy Task Force used a multistage challenge to attract design proposals that increase the resiliency of regions affected by Hurricane Sandy. The designers quickly and effectively solicited concepts and communicated the end goal of employing the solutions to rebuild the Tri-State area.72 But, caveat emptor: The quality and workability of submissions will depend strongly on the selected design elements. The fundamental design challenge for this outcome is to strike the right balance among the number of concepts and techniques solicited, the processes used to review them, and the plans for what happens to winning ideas.

Outcome benefits

Tap the wisdom of the crowd: Prizes focused on attracting new ideas can allow organizations to quickly obtain new concepts from a broad community and provide a broad survey of possible approaches to solving a problem. As Guido Jouret of Cisco Systems explained, “We believed that by opening ourselves up to the wider world, we could harvest ideas that had far escaped our notice and in the process break free from the company-centric ways of looking at technologies, markets and ourselves.”74

Take a big challenge in small bites: These prizes can be used to break a complex, ambiguous problem into smaller, less daunting parts. In some cases, prizes focused on attracting new ideas can help designers define the problem statement for a subsequent, bolder challenge.

Customize problem solving: Prizes that reward ideas can be tailored for specific types of participants and problems. For example, the designer can use a broad problem statement, with open eligibility and robust marketing, to tap a large population of participants or opt for a specific problem statement with restricted eligibility to attract a highly skilled technical community.

Critical design elements

Select your competitors. Designers typically seek one of three types of participants: the public; a broad mix of expertise; or specialized, often scientific, communities of interest. This choice strongly influences the quality and diversity of participant submissions, with the risk that a mismatch between the problem and participant pool may generate few workable ideas. To manage this problem, it can be helpful to use a technology platform associated with specific types of participants. Today, multiple online platforms can help facilitate and run prizes, such as InnoCentive, which solicits ideas from the scientific community, and Ashoka, which engages social entrepreneurs. Such platforms can tap into particular communities of interest, facilitate collaboration among participants, and support prize-related communications. (See Appendix D for a list of technology platforms.)

Determine how you’ll use the idea. It’s tempting to measure challenge success simply by the number of responses. While it’s true that a large number of responses increases your odds of finding a good idea, the workability of those ideas is even more important. In the Stanford Social Innovation Review, Kevin Starr warns designers: “Most crowdsourced ideas prove unworkable, but even if good ones emerge, there is no implementation fairy out there, no army of social entrepreneurs eager to execute someone else’s idea.”75 The Air Force Research Lab (AFRL) provides a strong example of translating submissions into workable solutions. Specifically, AFRL challenges include submission evaluation criteria that can be validated and further refined through laboratory testing with a focus on the ultimate use of the idea.76 Additional examples for designers include criteria to evaluate the maturity of submissions, the speed at which the submissions can be developed into prototypes or pilots, and the cost and ease of implementing submissions given an organization’s resource constraints.

Be prepared to assess submissions efficiently. Good designers typically match the anticipated volume of submissions with an appropriate number of properly resourced judges. Given the relatively low barriers to entry for prizes seeking ideas, however, the sheer volume of submissions can sometimes surprise and even overwhelm. Designers can forecast the likely number of submissions by examining trends from past prizes, surveying the potential participant community, and sending invitations requiring RSVPs to targeted groups. To maintain credibility with participants and sustain interest in the prize, successful designers often seek to reduce judging time. Many employ a two-step screening process: a larger, less specialized staff conducts an initial review before passing on the most promising ideas to expert judges. This review process, however, must be transparent to avoid perceptions of unfairness.

Recommended design tactics

Standardize submissions and clearly weight judging criteria: A common failure point of prizes seeking new ideas is unclear criteria for picking a winner. If judges must pick between apples and oranges, there is a higher risk that participants will dispute the results. In contrast, the FTC Robocall challenge made judging criteria especially straightforward. The three evaluation questions and corresponding weights were: 1) Does it work? (50 percent), 2) Is it easy to use? (25 percent), and 3) Can it be implemented? (25 percent). To level the playing field for individual participants, the FTC developed a separate track for organizations with 10 or more employees.77

Use multiple rounds: Prizes that focus on attracting new ideas increasingly feature multiple rounds to winnow the best submissions before final award.

Consider shorter and smaller challenges: Prizes seeking new ideas typically employ smaller purses and shorter competition lengths than those seeking other outcomes. This is justifiable to participants due to the lower level of effort required.

Design the prize with the end use in mind: Clearly communicating how winning ideas will be used can improve participation and spur participants to generate particular types of ideas. By linking ideas to the organization’s larger mission, designers can build stronger, deeper, and more lasting connections with the communities that generate them.
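A clearly weighted rubric like the FTC Robocall challenge’s (50/25/25) can be reduced to a simple calculation, which is part of what makes it transparent to participants. The sketch below assumes a 0–10 raw score per criterion; the criterion keys and sample scores are invented for illustration, not taken from the actual challenge.

```python
# Published FTC Robocall challenge weights; criterion names are paraphrased.
CRITERIA = {"does_it_work": 0.50, "easy_to_use": 0.25, "implementable": 0.25}

def weighted_score(raw_scores: dict) -> float:
    """Combine per-criterion raw scores (assumed 0-10) into one weighted total."""
    return sum(raw_scores[criterion] * weight
               for criterion, weight in CRITERIA.items())

# Hypothetical submission scored by a judge:
submission = {"does_it_work": 8, "easy_to_use": 6, "implementable": 9}
print(weighted_score(submission))  # 7.75
```

Because the weights sum to 1.0, every submission lands on the same 0–10 scale, making results directly comparable and easy to explain if participants dispute the outcome.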

G-20 SME Finance challenge

Leaders of the G-20 countries in partnership with Ashoka Changemakers launched the Small and Medium Enterprise (SME) Finance Challenge to solicit groundbreaking ideas on how public interventions can unlock private finance for SMEs across the world.

Designers knew how these new ideas would be used after the challenge—the G-20 countries created a $558 million fund to scale and support them. A short window between launch and submission (only 41 days), along with a $1,000 early-entry prize, maintained momentum and increased the number of participants.

Challenge designers lined up eight well-respected judges to work through the 333 participant submissions. As a non-monetary reward, the challenge winners attended the G-20 Seoul Summit as well as an SME conference in Germany.78

Build prototypes and launch pilots: Produce, test, and improve models

For prizes seeking to build prototypes or launch pilots, the goal is not simply to generate an idea that addresses an important public problem, but rather to realize a functional version of a technology, product, or service, and sometimes test it with its intended customers. Building prototypes or launching pilots often entails the creation of new technologies and can be particularly effective for shepherding them through late-stage research and early-stage development, a difficult part of the innovation lifecycle sometimes called the “valley of death.”79 For example, the My Air, My Health Challenge run by the EPA and HHS not only spurred the creation of sensor prototypes measuring pollution’s health impacts but also required participants to demonstrate how environmental agencies and individual citizens could put these systems into practical use.80 This outcome is particularly attractive because it can provide access to a new range of useful products and services, while requiring the organization to pay only for those that meet its needs. Prizes leading to products have the added benefit of relatively quantifiable and objective metrics of success. Designers focused on services can also require practical demonstrations of success. For example, in New York City, a School Choice Design Challenge recently asked participants to develop a new software application to help families select high school programs. If a winning app is selected, it will make it easier for New York City eighth graders to choose among more than 700 high school program options each year.82 An important consideration for designers focused on this outcome is providing participants access to facilities to test prototypes. The cost and logistical challenges of creating an environment in which to iterate on solutions are a significant barrier to entry that can stifle innovation.
Designers focused on this outcome should consider providing access to testing facilities in order to keep participants focused on research, innovation, and ideally future commercialization.83 For example, the Wendy Schmidt Oil Cleanup X CHALLENGE asked participants to develop solutions to clean surface oil from seawater. The challenge was valued at $1,400,000 and provided participants an opportunity to test their work at the National Oil Spill Response Research & Renewable Energy Test Facility.

Designers seeking to build prototypes or launch pilots should pay careful attention to problem definition as well as particular elements of design, such as motivators and structure. Expert designers can spend months defining the technical problem so that the prize is appropriately bounded. The Centers for Medicare and Medicaid Services’ Provider Screening Innovator Challenge, which asked competitors to develop screening software programs to help ensure that Medicaid funds are not diverted from the most vulnerable Americans, required more than a year to develop and ultimately involved 124 “mini-challenges” to attract the right solutions.84 Motivators and structure also matter because prize designers need to ensure that they attract the right kinds of participants, and that those participants are encouraged to compete in the right ways. Designers will often carefully study the motivations of distinct participant groups, including startup companies, large corporations, and academics, to ensure the challenge appeals to those most likely to compete.

Outcome benefits

Develop new intellectual property : Building prototypes or launching pilots can require significant time and money, especially when designers seek solutions that serve a public good, but are not attractive to commercial markets. Designers can overcome these barriers through a variety of incentives, including attracting investment capital, encouraging merger and acquisition activity, and building market awareness. One of the winners of the USDA’s Healthy Apps for Kids challenge used the momentum of the prize to develop a commercial opportunity. The media coverage surrounding his winning solution led to a for-profit version, with partners providing licensing and advertising opportunities. 86

: Building prototypes or launching pilots can require significant time and money, especially when designers seek solutions that serve a public good, but are not attractive to commercial markets. Designers can overcome these barriers through a variety of incentives, including attracting investment capital, encouraging merger and acquisition activity, and building market awareness. One of the winners of the USDA’s Healthy Apps for Kids challenge used the momentum of the prize to develop a commercial opportunity. The media coverage surrounding his winning solution led to a for-profit version, with partners providing licensing and advertising opportunities. Engage external viewpoints to test ideas : A prize can be a valuable tool for organizations that lack the internal capabilities to develop a prototype or pilot. Such prizes allow public agencies to tap into a diverse array of experts, tinkerers, inventors, and investors to achieve results beyond their own means.Consider the daunting task of designing dexterous, yet durable gloves for spacewalks. In 2009, NASA’s Astronaut Glove Challenge asked participants to improve space suit glove design to reduce the effort needed to execute tasks and improve the durability of the glove. Using a challenge allowed NASA to engage external participants to reimagine design and build a proof of concept. 87

Clarify your requirements: The design process for prizes that build prototypes or launch pilots can involve a broad community of potential participants (for example, companies, nonprofits, universities, and individuals), spurring them to examine technical requirements and determine the breakthroughs needed to achieve them. By defining success for a specific problem, prize designers can help a community of participants coalesce around critical technical or programmatic specifications.

Critical design elements

Be prepared with market analytics. Organizations often seek technical solutions unavailable in the commercial market. In these cases, prize development may require a relatively high operational budget to conduct a landscape review of immature market players, craft the problem statement, and design selection criteria. Partners that could make money from winning prototypes and are willing to invest in the prize can help cover some of these costs.

Tailor the purse to competitor risk and market conditions. To set the purse appropriately, designers typically investigate the costs of solution development as well as the potential market value of the new product or service. This requires economic and market analysis, a capability many public organizations lack and therefore engage vendors to complete. The purse does not need to cover the entire cost of development, particularly if outside investors are interested in supporting participating teams, but it does need to cover at least some of the risk participants assume. If only a small purse is possible, designers can supplement it with other non-monetary benefits, such as access to data, strong intellectual property protections, and introductions to venture capitalists. Remember, though, that commercial participants are unlikely to devote money or time to develop new products or services unless they believe they can sell them into an existing or emerging market.

Make sure winner selection is unambiguous. The selection criteria for the winning submission should be quantitative, rigorous, and testable, particularly for prizes with a technological focus. During prize design, it is helpful to develop, vet, and test criteria with outside experts, potential participants, and partners to avoid having to revisit selection criteria once the prize is under way.

Recommended design tactics

Hold mini-challenges: Challenges for new algorithms are common and are increasingly being split into measurable mini-challenges