When running a crowdsourcing contest, which strategy makes the most sense?
- Entry with most votes wins
- Select winner from top N (e.g. 20) most popular entries
- Pick the winner from what you like most, regardless of crowd feedback
Perhaps the best way to answer this question is, “It depends.” It depends on the objective of the contest, the nature of the submissions, and who is doing the voting. The post Four Models of Competitive Crowdsourcing discusses these various conditions.
For purposes of the analysis below, I’ll refer to the recent Enterprise 2.0 Conference Call for Papers. That was a crowdsourced initiative to select some of the sessions for the upcoming conference.
I generally think the best approach is the second one: select the winner from the top N most popular entries.
In this model, the crowdsourcing ends with selection by experts. This will be the most common form of crowdsourcing in the future. It’s a good blend, with the crowd participating and the sponsoring organization having the ultimate say. It’s how Cisco is running its I-Prize competition. It’s how Mountain Dew is running parts of its edgy Dewmocracy, such as the crowdsourced package designs.
Each of the elements offers its own value, which is important for both sponsoring organization and participants to understand.
The opening phase is to get the ideas, proposals, content, ads, etc. These ideas are the opening, the critical intellectual and artistic capital. Whether it’s open innovation or smart marketing, crowdsourcing initiatives tap a wider range of ideas than companies will generate on their own.
Several factors relate to getting meaningful value from this process:
Targeted community. A great value in crowdsourcing is the ability to tap the minds of people who share an interest with the sponsoring organization. Creating a call for their ideas is a motivator for getting new, good feedback. While not always the case, the “right” people will respond to a sponsoring organization’s crowdsourcing call. In the Enterprise 2.0 Conference call for papers, you didn’t see Java coding geeks or super fans of Justin Bieber posting submissions. It was a crowd of people with common interests in the subject, drawn from around the world.
This self-identification is an important point, and one that sometimes gets overlooked.
Actionable outcomes. Once the crowdsourcing initiative is complete, what are the next steps? Which ideas were selected, and how are they put into play? This may seem obvious, but it’s not always clear. The initial focus is to tap the crowd, without thinking through what to do with their submissions. Without actionable outcomes, all you’re doing is setting up a glorified discussion forum.
Motivation. Why should anyone submit to your crowdsourcing initiative? What’s in it for them? This is a critical, fundamental consideration. If you can’t paint a clear picture for participants, you haven’t established the raison d’être for your crowdsourcing event.
Motivation is very much tied to your targeted community. If a pre-existing community has already coalesced elsewhere, the social aspect of participation is itself a motivation.
Here’s a spectrum of motivators:
Altruism may be best characterized as a willingness to give up your time to contribute without any expectation of reward. It’s noble, but it’s one of the weakest motivators for participation. A recent idea effort by global PR firm Fleishman-Hillard asked participants to submit ideas, but lacked motivations for participation – actionable outcomes, rewards, social incentives, game mechanics, etc. Predictably, participation was low (23 ideas, 37 participants). While Daniel Pink’s Drive focuses on intrinsic motivations, even he does not assume a future in which we all operate altruistically.
Intangible benefits are those where you know you’re getting value; you just can’t put a number on it. These benefits can be very personal, or they can present obvious, if unpredictable, value. They include the prestige of doing well, which goes on the resume, or demonstrating expertise, as on Wikipedia.
Tangible benefits can include financial rewards. The Cisco I-Prize awards $250,000 for the best product idea. Those rewards can be for yourself or for others, as in the Pepsi Refresh campaign to raise money for different causes. Tangible benefits don’t have to be monetary: seeing your code suggestion make it into the next Linux release can make your own work with the open source OS better.
In the case of the Enterprise 2.0 Conference Call for Papers, the tangible reward is the chance to share your learning or to get your message out to a targeted audience. In an arena where information and awareness are critical, this is a significant value.
Diversity. While you need to reach a targeted community to spark contributions to a crowdsourcing initiative, that doesn’t mean everyone will submit from the same playbook. Diversity of input is a valuable benefit of crowdsourcing. Casting a wide net generally surfaces variations in people’s perspectives and ideas, and the web is ideal for this. As James Surowiecki said recently in an HP webcast:
The other thing I would say in terms of the individual, in terms of diversity, I think is really interesting is, the Internet for me is actually one of the, if not the, greatest tools for promoting diversity of thought that’s ever been invented.
Indeed, tapping this diversity is a key strategic value of crowdsourcing versus relying exclusively on a smaller, more homogeneous group.
Rules. Going into a crowdsourcing event, it needs to be clear how winners will be selected. What is the basis for selection? Who does the selecting? How many winners will there be?
A clear set of rules sets expectations and ensures everyone understands the final results. If there was one thing that could be improved in the Enterprise 2.0 Conference Call for Papers, it was the clarity of the rules governing the selection process.
Sourcing ideas is the first part of crowdsourcing’s value, followed by…
Polling a community is the other source of value in crowdsourcing, via their votes and comments. What do they think of the ideas and submissions?
Feedback offers these benefits:
- Multiple different ways to “see” an idea and its angles
- Refinement of submitted ideas and proposals
- Straw poll of market interest
- Self-identified talent pool of potential collaborators
There are several keys to getting the most from crowdsourced feedback.
Reward feedback. Feedback on an idea can be just as valuable as the idea itself. For the individual, feedback validates the value of an idea, or provides feedback to improve it. For the sponsoring organization, feedback helps identify the best submissions.
Rewards can be associated with both the quantity and quality of feedback. That second point – quality – is an important aspect. In any site where rewards – tangible or intangible – are available, inevitably some people will game the system. That’s where quality becomes the primary driver of recognizing feedback. Have the ability to track the reputations of participants, and recognize those offering the most value.
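As a minimal sketch of the quality-over-quantity point, here is one hypothetical way a reputation tally could weight endorsed comments more heavily than sheer volume (the function, field names, and weight are illustrative assumptions, not from any particular platform):

```python
from collections import defaultdict

def reputation_scores(feedback_events, quality_weight=5.0):
    """Hypothetical reputation tally: each comment earns a base point,
    but endorsements of that comment (a quality signal) count far more,
    which discourages gaming the system by volume alone."""
    scores = defaultdict(float)
    for event in feedback_events:
        scores[event["author"]] += 1.0 + quality_weight * event["endorsements"]
    return dict(scores)

events = [
    {"author": "ana", "endorsements": 4},  # one well-regarded comment
    {"author": "bob", "endorsements": 0},  # many low-value comments
    {"author": "bob", "endorsements": 0},
    {"author": "bob", "endorsements": 0},
]
scores = reputation_scores(events)
```

With these illustrative weights, one endorsed comment outscores three unremarked ones, which is the behavior you want when recognizing the participants offering the most value.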
Leverage all forms of feedback. Digital platforms are amazing. All activity that occurs is tracked to make the system work. This activity includes both the explicit types of feedback – votes, comments – as well as the implicit feedback – views, levels of interaction, watching and sharing submissions, etc.
Intelligent use of this implicit feedback provides much richer analytics for identifying top submissions, as well as top contributors. The site doesn’t have to be just a post-and-vote forum.
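One minimal sketch of blending explicit and implicit feedback into a single ranking score (the field names and weights here are illustrative assumptions a real platform would tune empirically):

```python
def engagement_score(submission):
    """Hypothetical weighted score mixing explicit feedback (votes,
    comments) with implicit signals (shares, views)."""
    return (
        3.0 * submission["votes"]       # explicit endorsement
        + 2.0 * submission["comments"]  # explicit discussion
        + 1.0 * submission["shares"]    # implicit: redistribution
        + 0.1 * submission["views"]     # implicit: raw attention
    )

submissions = [
    {"id": "a", "votes": 40, "comments": 5, "views": 900, "shares": 2},
    {"id": "b", "votes": 25, "comments": 20, "views": 1500, "shares": 12},
]
ranked = sorted(submissions, key=engagement_score, reverse=True)
```

Note how submission "b" can outrank "a" despite fewer votes, because its discussion, sharing, and viewing activity signal broader engagement.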
Enable connections and private messaging. Once on the site, people will naturally find others who share interests. For some crowdsourcing initiatives (not necessarily all), social networking is a part of the experience. Allow people to connect with others and follow their activities. Let them also message one another easily. This way they connect with the sponsoring organization, as well as with each other.
Focus on the top [N] most popular submissions. The crowd is a great filter. With a targeted community, sponsoring organizations get a straw poll of what’s important to the market. However, a misconception some people have is that the winner of a crowdsourcing initiative is the one receiving the most votes.
While that may be appropriate in some marketing-oriented events, it generally is not a recommended strategy. Rather, the winner(s) of a crowdsourcing initiative should be drawn from the set of the most popular submissions. This ensures that the crowd’s feedback is an important part of the decision-making process, while reserving decisions about “fit” for the sponsoring organization. It also allows organizations to distinguish between submissions backed by large followings and those that garner a more viral, organic level of interest.
The Enterprise 2.0 Conference did this, moving the top 100 submissions into the second stage of the crowdsourcing competition. From there, 12 sessions were selected, not necessarily the top vote-getters.
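The two-stage model above can be sketched as a crowd filter followed by an expert pick (the function names, fields, and the "fit" criterion are hypothetical, for illustration only):

```python
def shortlist_by_votes(submissions, n=100):
    """Stage 1: the crowd's votes filter entries to a top-N shortlist."""
    return sorted(submissions, key=lambda s: s["votes"], reverse=True)[:n]

def expert_selection(shortlist, fits_program, winners=12):
    """Stage 2: the sponsoring organization picks winners from the
    shortlist; these are not necessarily the top vote-getters."""
    return [s for s in shortlist if fits_program(s)][:winners]

entries = [
    {"id": 0, "votes": 90, "topic": "adoption"},
    {"id": 1, "votes": 80, "topic": "security"},
    {"id": 2, "votes": 70, "topic": "adoption"},
    {"id": 3, "votes": 60, "topic": "culture"},
]
top = shortlist_by_votes(entries, n=3)  # crowd narrows the field
final = expert_selection(top, lambda s: s["topic"] == "adoption", winners=2)
```

The crowd's votes bound the candidate pool, but the final selection criterion (here, an assumed topic fit) stays with the organization.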
The recommendations laid out here come from observing a number of crowdsourcing efforts. Consider them good starting points for tapping large communities for submissions and feedback.
(Cross-posted @ Spigit Blog)