I feel like everybody I know has a project they’re trying to raise some money for on Kickstarter or Indiegogo. And why not? If people on the internet are willing to fund a 50-foot-long mechanical snake or a giant statue of RoboCop, you can safely assume that there are a few folks out there willing to throw some money at your idea, no matter how ridiculous it may be.
Crowdsourcing, tapping into the vast population and resources of internet users, can be a really efficient fundraising strategy, and thankfully it seems to be just as effective for reasonable goals as it is for silly ones. In fact, a bunch of people in my neighborhood recently raised over $25,000 in crowdsourced donations, in only a matter of weeks, to go toward a friend’s medical bills. And several breweries have used crowdsourcing to raise the money to get up and running. But beyond generating cash flow, crowdsourcing can also be very useful for scientific research.
A group of investigators in Canada recently used crowdsourcing to collect data for a study which categorized the emotions elicited by nearly 10,000 different words (this figure neatly summarizes their findings in a “wheel of emotion,” which could totally be a game show). Over 2,000 people participated in the study through Amazon’s Mechanical Turk platform, which is a popular service for crowdsourcing tasks, including scientific experiments. Participants get paid for their time, and researchers get a huge pool of potential subjects for their studies. This kind of data collection simply wasn’t possible 20 years ago, and it saves researchers and participants both time and money. Everybody wins!
Another team of scientists is crowdsourcing the actual analysis of some of their data. Neuroscience research can generate mountains of information, and with the recent announcements of the BRAIN Initiative in the United States and the Human Brain Project in Europe, there could very well be more brain data to crunch than there are researchers available for crunching (that sentence made me want a candy bar). One way to deal with this is to break down the analyses into smaller, more manageable tasks, then teach a lot of people how to do those tasks, using something like a massive open online course along the lines of those offered by Northwestern or MIT. Training a horde of research assistants frees up a ton of time and resources for scientists, and it gives those research assistants some skills and experience moving forward. Everybody wins again!
At this point, a skeptical reader might have raised several silent objections to my repeated declarations that everybody wins when using crowdsourcing in scientific research. And, skeptical reader, you are absolutely correct! At least, you’re correct if your objections are the same as the ones I’m about to list (though there are probably other reasonable objections that I haven’t thought of yet, in which case, tweet me @jimkloet or add to the comments below).
Ahem. So in regard to crowdsourcing the collection of data, one big concern is that, since you never see your crowdsourced participants/analysts in real life, you can never be sure that they are who they say they are, or that they’re doing what they’re supposed to be doing. A user profile may state that a participant is a 30-year-old man living in Chicago with his wife and dog, but in reality he could be, I dunno, a 40-year-old man living in Chicago with his brother and two cats. Who knows? Not you!
Of course, there are ways to verify your participants’ personal information, depending on how much money you want to spend in the process. And in some studies it may not matter if your participants really are who they claim to be. But generally, you can’t get the same kind of experimental control using crowdsourced participants as you would in the lab, so keep that in mind.
There is also the concern that your participant is not doing what he or she is supposed to be doing in the task, in which case the data you’re collecting won’t actually answer your experimental question. You may not believe this, but there are apparently some jerks who spend a lot of time on the Internet. Some of these jerks may even participate in online crowdsourced research studies, but instead of playing by the rules of the experiment, they play by their OWN rules, pressing buttons how THEY want to press them, cuz that’s how they roll. But you get the occasional jerk in lab studies too, except then you have to deal with their jerkiness in person, which I think is much worse. So there’s a tradeoff there as well.
As far as crowdsourcing the analysis of data goes, a lot of those just-mentioned objections apply here as well. Again, you can have the issue of misrepresentation, like if somebody says they have a degree in chemistry when in reality they just watch a ton of Breaking Bad. And again, there are jerks out there who intentionally mess things up, just for fun. But the same safeguards apply here as they do for data collection, which underscores an important point about research in general: no single method of data collection or analysis is perfect, either in the lab or over the Internet, so it behooves good scientists to always be careful when running experiments and analyzing data, especially when real people are involved.
I’m always going to get excited when scientists figure out ways to integrate new technology into their research practices, so crowdsourcing in science really appeals to me. I suspect that systems like Mechanical Turk will only improve from this point forward, and that we will see a lot more research comparing lab-based and crowdsourced studies. And who knows, with all of the cuts in science funding, maybe someday I’ll be forced to turn to the Internet to Kickstart a research project! I just hope that people find my work to be as interesting as a giant RoboCop.