ABSTRACT
With the abundance and ubiquity of mobile devices, a new class of applications, called spatial crowdsourcing, is emerging, which enables spatial tasks (i.e., tasks related to a location) to be assigned to and performed by human workers. However, a major challenge in spatial crowdsourcing is verifying the validity of the results provided by workers when the workers are not equally trusted. To tackle this problem, we assume every worker has a reputation score, which gives the probability that the worker performs a task correctly. Moreover, we define a confidence level for every spatial task: the answer to the task is accepted only if its confidence exceeds a given threshold. Thus, the problem we address is to maximize the number of spatial tasks assigned to a set of workers while satisfying the confidence levels of those tasks. A unique aspect of our problem is that the optimal assignment heavily depends on the geographical locations of workers and tasks: every spatial task must be assigned to a sufficient number of workers so that their aggregate reputation satisfies the task's confidence level. Consequently, an exhaustive approach must compute the aggregate reputation score (using a typical decision-fusion aggregation mechanism, such as voting) for all possible subsets of the workers, which renders the problem complex (we show it is NP-hard). We therefore propose a number of heuristics and, through extensive experiments on real-world and synthetic data, show that by exploiting our problem's unique characteristics we can achieve close-to-optimal performance at the cost of a greedy approach.
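As a concrete illustration of the aggregation step described above, the aggregate reputation of a candidate worker subset under majority voting is the probability that a strict majority of the workers answer correctly, given their individual reputation scores. The sketch below is not taken from the paper; it assumes independent workers and a simple majority rule, and the function name and threshold are illustrative only:

```python
from itertools import combinations
from math import prod

def majority_vote_confidence(reputations):
    """Probability that a strict majority of independent workers answer
    a task correctly, given each worker's reputation score in [0, 1]."""
    n = len(reputations)
    need = n // 2 + 1  # strict majority
    total = 0.0
    # Enumerate every subset of workers of majority size or larger that
    # answers correctly, and sum the probability of each such outcome.
    for k in range(need, n + 1):
        for correct in combinations(range(n), k):
            cset = set(correct)
            total += prod(
                reputations[i] if i in cset else 1.0 - reputations[i]
                for i in range(n)
            )
    return total

# A task with confidence threshold 0.85 (hypothetical value) would accept
# this three-worker subset, since 0.896 >= 0.85:
conf = majority_vote_confidence([0.8, 0.8, 0.8])
```

This per-subset computation is exactly what makes the exhaustive approach expensive: it must be repeated for every candidate subset of workers, which is the source of the NP-hardness the abstract refers to.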
Index Terms
- GeoTruCrowd: trustworthy query answering with spatial crowdsourcing