DOI: 10.1145/2525314.2525346

GeoTruCrowd: trustworthy query answering with spatial crowdsourcing

Published: 05 November 2013

ABSTRACT

With the abundance and ubiquity of mobile devices, a new class of applications, called spatial crowdsourcing, is emerging, which enables spatial tasks (i.e., tasks related to a location) to be assigned to and performed by human workers. However, one of the major challenges with spatial crowdsourcing is verifying the validity of the results provided by workers when the workers are not all trusted equally. To tackle this problem, we assume every worker has a reputation score, which states the probability that the worker performs a task correctly. Moreover, we define a confidence level for every spatial task, which states that the answer to the task is only accepted if its confidence exceeds a certain threshold. Thus, the problem we aim to solve is to maximize the number of spatial tasks assigned to a set of workers while satisfying the confidence levels of those tasks. A unique aspect of our problem is that the optimal assignment of tasks heavily depends on the geographical locations of workers and tasks. This means that every spatial task should be assigned to a sufficient number of workers such that their aggregate reputation satisfies the confidence of the task. Consequently, an exhaustive approach needs to compute the aggregate reputation score (using a typical decision-fusion aggregation mechanism, such as voting) for all possible subsets of the workers, which renders the problem complex (we show it is NP-hard). Subsequently, we propose a number of heuristics, and through extensive experiments on real-world and synthetic data we show that, by exploiting our problem's unique characteristics, we can achieve close-to-optimal performance at the cost of a greedy approach.
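
To make the confidence check concrete, the following minimal Python sketch (not taken from the paper) illustrates how the aggregate reputation of a candidate set of workers could be evaluated against a task's confidence threshold, assuming independent workers and simple majority voting as the decision-fusion rule; the function name aggregate_confidence and the threshold value are illustrative only.

    from itertools import combinations

    def aggregate_confidence(reputations):
        # Probability that a simple majority of independent workers answer a
        # task correctly, given each worker's reputation (probability of being
        # correct). Enumerating outcomes is exponential in the subset size, so
        # this is only practical for the small per-task subsets of workers.
        n = len(reputations)
        majority = n // 2 + 1
        prob = 0.0
        for k in range(majority, n + 1):
            for correct in combinations(range(n), k):
                correct = set(correct)
                p = 1.0
                for i, r in enumerate(reputations):
                    p *= r if i in correct else 1.0 - r
                prob += p
        return prob

    # Example: three workers with reputations 0.9, 0.7, and 0.6. The task's
    # answer is accepted only if the aggregate confidence meets the task's
    # confidence threshold (0.8 here, an arbitrary illustrative value).
    score = aggregate_confidence([0.9, 0.7, 0.6])
    print(round(score, 3))   # 0.834
    print(score >= 0.8)      # True

Because such a check must, in principle, be evaluated over all possible worker subsets, the exhaustive assignment described above quickly becomes intractable, which is what the proposed heuristics are designed to avoid.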


Published in

SIGSPATIAL'13: Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems
November 2013, 598 pages
ISBN: 9781450325219
DOI: 10.1145/2525314

        Copyright © 2013 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 220 of 1,116 submissions, 20%
