Online labor markets, such as Amazon's Mechanical Turk, are often used to crowdsource simple, short tasks like image labeling and transcription. However, expert workers, who tend to be busy putting their knowledge to work in traditional employment, are less likely to participate in microtask systems, and it can be hard to find a sizeable number of them via online labor markets. This makes it difficult to use a Turk-like platform to complete complex jobs that require specific skillsets, limiting both the wages workers can earn and the overall impact of crowdsourcing labor markets.

Communitysourcing helps solve this problem by bringing crowdsourcing work directly to crowds of expert workers, using kiosks that provide tasks and rewards situated within these experts' physical communities. As an example, we implemented Umati, the Communitysourcing Vending Machine, a vending kiosk with a touchscreen interface. We placed Umati within a specific community (e.g., students), provided work that fit their skill set (e.g., grading intro-level exams), and rewarded them with items they found personally valuable (e.g., candy). We found that Umati was able to generate more consistent and accurate grades than those we could generate using Mechanical Turk, at less cost than paid expert graders.


Kurtis Heimerl, Brian Gawalt, Kuang Chen, Tapan Parikh, and Björn Hartmann. Communitysourcing: Engaging Local Crowds to Perform Expert Work Via Physical Kiosks. In Proceedings of CHI 2012. ACM, New York, NY, USA, 2207-2210.
ACM Digital Library | Local pdf

Talk Slides: PPT file | pdf


Kurtis Heimerl
Brian Gawalt
Kuang Chen
Tapan S. Parikh
Björn Hartmann


Demo Hour
ACM Interactions, December 2012

Hacked Vending Machine Trades Snacks For Skills
Tech News Daily. April 27th, 2012



The Umati communitysourcing kiosk. The touchscreen at upper right prompts local experts for tasks and rewards workers with candy when they've performed enough labor.

A schematic view of the four major components of Umati: the touchscreen interface, a card reader for worker ID, the laptop that runs the system, and the Arduino connection to the vending machine's dispensing motors.

The Grading Task


Folks passing by Umati (as set up outside the 306 Soda Hall auditorium in the CS department) were asked to grade exam questions: they read the question text, the instructor's solution guide, and a scanned copy of the student submission, then used a slider to select a score from 0 to 4 points.

Traditional Grading Benchmark

We enlisted a crew of ten graduate students, all with teaching experience, to grade the same sample exams. As grading is inherently somewhat subjective, we took the median expert score given to each exam item as the ground-truth score the item "truly" deserved.
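This median-of-experts aggregation can be sketched in a few lines of Python. The item IDs and grades below are hypothetical, invented only to illustrate the procedure; the actual study used ten expert graders per item.

```python
from statistics import median

def ground_truth_scores(expert_scores):
    """Aggregate per-item expert grades into a ground-truth score.

    expert_scores maps each exam item ID to the list of integer
    grades (0-4) assigned by the expert graders; the median of
    those grades is taken as the item's "true" score.
    """
    return {item: median(scores) for item, scores in expert_scores.items()}

# Hypothetical grades from ten expert graders for two exam items.
experts = {
    "q1": [3, 4, 3, 3, 4, 3, 2, 3, 4, 3],
    "q2": [1, 2, 1, 0, 1, 2, 1, 1, 1, 2],
}
truth = ground_truth_scores(experts)
# truth["q1"] == 3, truth["q2"] == 1
```

Using the median rather than the mean keeps the ground truth robust to a single unusually harsh or generous grader.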

Umati v. MTurk

Exam scores aggregated from Umati users were far more likely to match the ground-truth score yielded by experts than scores aggregated from workers recruited via Mechanical Turk.