
Hollis Tibbetts




Maximizing Crowdsourcing Success

Strategies for minimizing risk and maximizing success

After the publication of "Crowdsourcing - A Best Practice or a Worst Practice", I had several back-and-forth conversations with Jim McKeown and Jack Hughes (Communications Director and TopCoder founder, respectively) of TopCoder on the merits and shortcomings of the article.

They had significant value to add in this area - especially concerning the "questions to ask if you're considering crowdsourcing" - a list of questions designed to help readers determine whether crowdsourcing would be appropriate for them and, if so, identify strategies for minimizing risk and maximizing success. I felt that the additional detail and viewpoint would be of great value to readers, so I invited Jack to comment on and expand the original article.

Rather than repeat the entire article, I've only reproduced a few key sentences and added some sub-headings to provide context for Jack's comments and to tie those comments back to the original article.

Any additions (such as text reproduced from the previous article, sub-headings inserted, or a conclusion at the end) are italicized for easy identification.

The author is not affiliated with TopCoder, Inc., and has not received any compensation for this article.

About the term "crowdsourcing"
Although TopCoder accepts it as a general label, we aren't big fans of the term crowdsourcing. At TopCoder we view ourselves as part of a community, and we believe that carries certain responsibilities we take seriously. We think that can get lost in the general term.

The Ethics of Crowdsourcing
We think the part of your article that deals with ethics will become increasingly important to the space.

While we would agree that this is a large area of discussion (and likely study) and that your overall comments are consistent with our approach, I am concerned that the crowdsourcing space has much room for mischief in the short term. I think companies that are ethical and conduct themselves appropriately will win out over the long term, but in the short term, as people enter these markets, there can be a large temptation to take advantage - particularly in the absence of rules.

Not having clearly defined rules for who will get paid what, when, and under what circumstances can lead to misunderstandings at its most innocent and to participants being taken advantage of at its most egregious. These rules need to be established *before* any activity is performed. Transparency is also key here. Participants - particularly in competitive situations - should know who they are competing against and have as much information about their competitors as possible so that they can make an informed choice about whether to participate. They should understand the method of adjudication and what recourse they have to clear up misunderstandings or deal with problems.

They should be able to understand (in detail) why their submission was accepted or not. Finally, we are strong advocates that the ownership of any work product remain with the contributor until consent of the participant is given for transfer (presumably because the task is ended and remuneration assured). This last step puts the onus on the platform provider (or user) rather than the participant. This at least gives the participant leverage if a platform provider tries to change the rules after the fact or renegotiate.

While there are real economic benefits to a crowdsourcing approach, they should not come from the exploitation of workers, but rather through efficiencies in matching skill with need, process improvements leading to quality differentiation and the ability to reach outcomes not previously possible.

Ethical and fairness arguments often turn on remuneration. It is important to emphasize, as you note, that there are non-remuneration incentives as well. Even though they are non-remunerative, there are still ethical considerations. TopCoder, for instance, runs more non-commercial competitions than commercial ones. These competitions can affect a person's ability to find a job. As a company we need to be aware of this and handle our members' data and performance statistics accordingly. As you state, the rules (who will get paid what, how an individual's reputation will be affected, when and under what circumstances) and the definition of the relationship should be clearly defined up front - prior to any work being done or any individual participating in an activity that may affect them. For now, I hope companies - and the customers who use them - will take this into consideration when choosing a crowdsourcing platform.

Questions to Consider (to determine if crowdsourcing is right for you, to minimize risk, and to maximize success)
I'd like to take some time here to describe our approach to some of the questions you pose at the end of your article under risks.

Even though there are proven ways to mitigate many of the risks - and some of the mitigation approaches can be quite sophisticated and effective - they do not obviate the very appropriate maxim you quote: use the right tool for the job.

Virtual models are not a panacea: they will not fix all of your ills for little money and no effort. Crowdsourcing, used appropriately, can be a very powerful tool. It has been our experience at TopCoder that learning to use these tools *and* applying them appropriately is necessary for success.

The questions you pose are each probably worth an article in and of themselves. TopCoder has spent many years honing solutions to some of these. A brief suggested approach follows each question.

[author note: questions are reproduced from the original article and are italicized; the suggested approaches are from Jack Hughes]

How much effort is required to appropriately define the problem as well as requirements for the solution...up front?

We have found that problem definition is the single most important determinant of outcome. Whether for an algorithm, a code design, a component, or something as simple as a logo, a well-defined structure and definition allow people to understand what you are looking for.

You should look for structured platforms that include the ability to help you (at least initially) define problems and communicate what you would like. Ideally, a well-designed platform will help you define the problem, set a price, and provide feedback on the likelihood of success in getting a solution.

How much effort will it take to manage the crowdsourcing process?

We see the crowdsourcing space as generally bifurcated: task-based, where you are looking for a specific task to be done once or many times (99designs, Mechanical Turk, etc.), and solution-based, where complex systems and algorithms can be built and scientific discovery performed (TopCoder and InnoCentive, respectively).

While there is some overlap between the two models - you can do web page design on Mechanical Turk and build a logo on TopCoder - they are differentiated by their ability to handle complexity. Much of this has to do with the management models employed. TopCoder, for instance, employs a sophisticated decomposition model for breaking down large, complex systems into small pieces along a number of dimensions (creative, software design/build, and analytics/algorithms).

The idea here is that as much management as possible should be handled by the platform itself (project management is a function of TopCoder through member co-pilots). A good rule of thumb is that if a process is task based, you will be doing more of the management. Larger projects should have management support built into the platform you are using.

How much calendar time will be expended if the crowdsourcing process fails to yield a useful solution?

For relatively simple tasks (logos, copy, etc.) it should be clear within days or weeks whether or not the process has yielded a successful result. For larger projects, the platform should have a mechanism that is analogous to a traditional project plan (at TopCoder this is called “Gameplanning” and is a formal process run by community members).

Our preference is for outcome-based management models. For instance, a Gameplan differs from a traditional project plan in that it does not track resources or effort, just results in the form of completed contests. So a Gameplan is simply the series of contests (with fixed costs and timeframes based on historical data) that make up a project. A Gameplan can span anywhere from hours (for small tasks such as a testing competition) to months (for the development of a sophisticated platform such as an e-commerce offering).
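To make the Gameplan idea more concrete, here is a minimal, hypothetical sketch - not TopCoder's actual data model, just an illustration under the assumptions described above - of a project expressed as an ordered series of contests with fixed prizes and durations, from which total cost and elapsed time fall out directly, with no resource or effort tracking. The contest names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Contest:
    name: str     # hypothetical contest name
    prize: float  # fixed cost, assumed to come from historical pricing data
    days: int     # expected length of the contest window

# Hypothetical Gameplan: a project is just an ordered series of contests.
gameplan = [
    Contest("Conceptualization", 1500.0, 7),
    Contest("UI prototype",      2000.0, 10),
    Contest("Component design",  1200.0, 5),
    Contest("Component build",   1800.0, 7),
    Contest("Assembly and test", 1000.0, 5),
]

total_cost = sum(c.prize for c in gameplan)
total_days = sum(c.days for c in gameplan)  # assumes contests run back to back
print(f"{len(gameplan)} contests, ${total_cost:,.0f}, roughly {total_days} days")
```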

What is the impact on the business if crowdsourcing doesn't work for this particular problem?

Here we suggest companies start small and develop an internal knowledge of what works best for them. If there is a project or task that is on the critical path, managers would be wise not to start there. Over time as internal staff develop the requisite skills for the integration of crowdsourcing platforms, more and more critical tasks can be taken on.

Of course, there is somewhat of a chicken-and-egg phenomenon here: many times a company is looking to crowdsourcing to alleviate problems with other approaches. Still, even though a well-designed and internally integrated platform may help, it takes some time. People should not try to bite off more than they can chew.

What is the impact on the business of using internal people instead of crowdsourcing? What is the "opportunity cost" of using internal people?

From a business perspective, this can be extremely difficult for organizations of all types and sizes. Here is another way crowdsourcing is analogous to outsourcing: the prospect of taking what was historically the work of internal staff and moving it outside can be extremely nerve-racking and disruptive to an organization. There certainly is an opportunity cost to not taking advantage of at least some of the benefits of virtual models, but this needs to be weighed against the cost of disruption to an organization.

Crowdsourcing platforms should help people get more done for an organization while allowing internal people to concentrate on high-value contribution. You may not relish documenting or testing a system, but, believe it or not, there are people out there who enjoy this type of work immensely and have elevated it to a master art. I hate to admit it, but TopCoder has allowed me to get extremely lazy while getting more done than I ever thought I could with limited resources.

Can you define "pull the plug" points for a crowdsourcing project? For example, how many parties express interest in working on your problem?

A well-structured platform should be optimized for outcome *and* effort. It does you no good to create a contest and then have no one participate or receive poor-quality submissions, particularly when you are considering a crowdsourcing platform as a long-term addition to your sourcing needs. The platform itself should inform you of the elements required to receive an optimal number of good-quality submissions.

TopCoder has over 40 different contest types and optimal participation metrics for each. It turns out, for instance, that for the best experience for both customer *and* member, the optimal number of submissions for a component development contest is 2. This may seem counterintuitive (many customers expect that there will be tens or hundreds of submissions for every type of contest), but it turns out that to get to very high reliability (95%+ at defined quality metrics) this is all that is needed; any more would be a waste and leave a bad taste with participants who have submitted but do not win. Not coincidentally, TopCoder pays first and second place for component development competitions. Other competitions are different, sometimes paying into the tens or even hundreds of places.
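One way to build intuition for why such a small number can suffice - this is purely illustrative arithmetic under an independence assumption, not TopCoder's actual reliability model - is to note that if each submission independently meets the quality bar with probability p, the chance that at least one of n submissions passes is 1 - (1 - p)^n, which crosses 95% at n = 2 once p is around 0.8.

```python
# Illustrative arithmetic only (assumes independent submissions); not TopCoder's model.
def at_least_one_passes(p: float, n: int) -> float:
    """Probability that at least one of n submissions meets the quality bar."""
    return 1 - (1 - p) ** n

for p in (0.6, 0.7, 0.8):
    probs = [round(at_least_one_passes(p, n), 3) for n in (1, 2, 3)]
    print(f"per-submission pass rate {p:.1f} -> {probs}")
# With a 0.8 pass rate, two submissions already give 0.96 -- past the 95% mark --
# while a third adds little, which hints at why extra submissions are mostly waste.
```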

Can you establish some preliminary indicators that allow you to predict success likelihood for the final solution?

As stated above, a well-designed platform should have prediction capability built in. A number of factors go into achieving a successful outcome: problem type, problem definition, price, the brand of the company performing the work, the interest level of the work, and the size of the participant field (community); even the time of year, month, week, and day can make a difference in outcome. Whatever platform you are using should take all of this into account and give you some ability to pull levers based on the type of outcome you need - a task, solution components, or an overall solution.

Figure 1: Example screenshot of prediction capability for one of TopCoder's contest types. Note the correlation between the number of predicted submissions and actual submissions.
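As a purely hypothetical illustration of what "pulling levers" against those factors might look like - this is not TopCoder's prediction engine, and the function, weights, and ratings below are invented - the sketch scores an imaginary contest posting from a few of the factors above and turns it into a rough expected submission count; a real platform would fit something like this from historical contest data.

```python
# Hypothetical, hand-tuned illustration of a submission predictor.
# A real platform would learn weights like these from historical contest data.
def predicted_submissions(prize: float, spec_quality: float,
                          interest: float, community_size: int) -> float:
    """spec_quality and interest are rough 0-1 ratings; the result is only an estimate."""
    reach = 0.0005 * community_size                     # more members, more eyes on the post
    appeal = prize / 1000.0 + 2 * interest + 3 * spec_quality
    return round(reach * appeal, 1)

# "Pulling levers": compare a vague, cheap posting with a clearer, better-paid one.
print(predicted_submissions(prize=500,  spec_quality=0.3, interest=0.5, community_size=2000))  # ~2.4
print(predicted_submissions(prize=1500, spec_quality=0.9, interest=0.7, community_size=2000))  # ~5.6
```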

How do you decide between "pull the plug" vs. "go back and try to remedy the situation" if the leading indicators aren't looking good?

Particularly for those new to the process, contests are going to fail. The “pull the plug” decision really depends on the nature of the task and on why it appears that there will be no submissions or that they will be of low quality. Ideally, the platform would be able to make recommendations as to the price for the task and what can be expected for participation. If the content of the problem spec is lacking (which happens relatively often with new users), the remedy is trickier.

At TopCoder, we use a rule of thumb that states if a contest has to be ‘reposted’ more than once, it is likely that price isn’t the issue. Participants vote with their feet. If they cannot figure out what you are trying to do, they simply won’t participate. If this is critical work, we would suggest that it be pulled and done internally and that something of a less critical nature be done to learn the process of problem definition.

In the beginning, there is a tendency to define things too broadly. This is human nature. But another good rule of thumb is that the more distinct the task, the more likely the participation - and thus the better the outcome. TopCoder has community-based processes to review specs before they are posted. This can be invaluable to new users.
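A rough, hypothetical triage of those rules of thumb - sketched here purely for illustration, not an actual TopCoder feature - might look like the following.

```python
# Hypothetical triage based on the rules of thumb above; not an actual platform feature.
def next_step(reposts: int, submissions: int, critical: bool) -> str:
    if submissions > 0:
        return "proceed: review the submissions against your success criteria"
    if reposts < 2:
        # Too early to blame the spec; adjust the price and repost.
        return "repost, possibly with a higher prize"
    # Reposted more than once with no takers: price probably isn't the issue.
    if critical:
        return "pull the plug: do this work internally and practice spec-writing on less critical tasks"
    return "rework the problem spec (ideally via a community spec review) and repost"

print(next_step(reposts=2, submissions=0, critical=True))
```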

Can you define "success"? Do you have criteria in place for judging and testing solutions?

Defining success is not just important for ensuring customer satisfaction (although that matters for all of the obvious reasons); it is also an important signal to participants that you are serious.

A healthy ecosystem is supported by two things:

1) People are reasonably sure that they will be paid fairly for their contributions if they provide a quality solution; and

2) What they are providing is of value to the customer.

Crowdsourcing is not any different in this regard than any other form of work. People want to feel that good work is appreciated and they want their name and reputation associated with value.

TopCoder has many mechanisms for the definition of success, from client choice for creative output (graphics, app design, logos, etc.) to automated scoring and testing of algorithms to rigorous scorecard-based peer review. All of these elements are built into the platform itself.

 

Conclusion
What TopCoder has managed to do is create a software platform that embodies best practices for maximizing the success of the crowdsource model, as well as build a community and culture around that platform. In this author's opinion, the confluence of the two creates a significant force. The software platform empowers the community; the community strengthens the software platform. The two are synergistic in nature, creating something whose effect is greater than the sum of the individual effects.

As a strategy for leveraging the inherent power of the crowdsource model, while minimizing (or at least measurably reducing) exposure from some of the weaknesses of the model, TopCoder's community and software platform are definitely a best practice.

Irrespective of how (or whether) an organization implements a crowdsource project, the detail provided and points raised by Jack Hughes offer some excellent insights into deciding if crowdsourcing is the right thing for your project and, if it is, how to move forward.

More Stories By Hollis Tibbetts

Hollis Tibbetts, or @SoftwareHollis as his 50,000+ followers know him on Twitter, is listed on various "top 100 expert" lists for a variety of topics ranging from Cloud to Technology Marketing. By day, Hollis is Evangelist & Software Technology Director at Dell Software. By night and on weekends he is a commentator, speaker, and all-round communicator about Software, Data, and Cloud in their myriad aspects. You can also reach Hollis on LinkedIn – linkedin.com/in/SoftwareHollis. His latest online venture is OnlineBackupNews, a free reference site to help organizations protect their data, applications, and systems from threats. Every year, IT downtime costs $26.5 billion in lost revenue. Even with such high costs, 56% of enterprises in North America and 30% in Europe don’t have a good disaster recovery plan. Online Backup News aims to make sure you have the news and tips needed to keep your IT costs down and your information safe by providing best practices, technology insights, strategies, real-world examples, and tips and techniques from a variety of industry experts.

Hollis is a regularly featured blogger at ebizQ, a venue focused on enterprise technologies, with over 100,000 subscribers. He is also an author on Social Media Today "The World's Best Thinkers on Social Media", and maintains a blog focused on protecting data: Online Backup News.
He tweets actively as @SoftwareHollis

Additional information is available at HollisTibbetts.com

All opinions expressed in the author's articles are his own personal opinions and not those of his employer.