DOI: 10.1145/3035918.3054776

Crowdsourced Data Management: Overview and Challenges

Published: 09 May 2017

Abstract

Many important data management and analytics tasks cannot be fully addressed by automated processes. Crowdsourcing is an effective way to harness human cognitive abilities to process these computer-hard tasks, such as entity resolution, sentiment analysis, and image recognition. Crowdsourced data management has recently been studied extensively in both research and industry. In this tutorial, we survey and synthesize a wide spectrum of existing studies on crowdsourced data management. We first give an overview of crowdsourcing and then summarize the fundamental techniques that must be considered in crowdsourced data management: quality control, cost control, and latency control. Next, we review crowdsourced operators, including selection, collection, join, top-k, sort, categorization, aggregation, skyline, planning, schema matching, mining, and spatial crowdsourcing. We also discuss crowdsourcing optimization techniques and systems. Finally, we present emerging challenges.
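The quality-control techniques the abstract refers to center on truth inference: recovering each task's true answer from noisy worker labels. As a minimal, hypothetical sketch (not code from the tutorial itself): majority voting, plus a "one-coin" simplification of Dawid-Skene-style EM in which each worker is modeled by a single accuracy parameter and labels are binary. All names and data are illustrative.

```python
from collections import Counter, defaultdict

def majority_vote(answers):
    """Aggregate crowd answers per task by simple majority.

    answers: list of (task, worker, label) triples.
    Returns {task: winning_label} (ties broken arbitrarily).
    """
    votes = defaultdict(Counter)
    for task, _, label in answers:
        votes[task][label] += 1
    return {task: c.most_common(1)[0][0] for task, c in votes.items()}

def em_truth_inference(answers, n_iter=20):
    """One-coin EM: jointly estimate task truths and worker accuracies.

    Each worker w is modeled by a single accuracy acc[w] (probability of
    answering correctly) -- a simplification of the full Dawid-Skene
    confusion-matrix model. Labels are assumed to be in {0, 1}.
    Probabilities are multiplied directly; with many workers per task,
    a log-space implementation would be needed to avoid underflow.
    """
    tasks = sorted({t for t, _, _ in answers})
    workers = sorted({w for _, w, _ in answers})
    acc = {w: 0.8 for w in workers}  # initial guess: workers mostly reliable
    prob = {}                        # posterior P(truth = 1) for each task
    for _ in range(n_iter):
        # E-step: posterior over each task's truth, given worker accuracies
        for t in tasks:
            p1 = p0 = 1.0
            for task, w, label in answers:
                if task != t:
                    continue
                p1 *= acc[w] if label == 1 else 1 - acc[w]
                p0 *= acc[w] if label == 0 else 1 - acc[w]
            prob[t] = p1 / (p1 + p0)
        # M-step: re-estimate each worker's accuracy against the posteriors
        for w in workers:
            num = den = 0.0
            for t, worker, label in answers:
                if worker != w:
                    continue
                num += prob[t] if label == 1 else 1 - prob[t]
                den += 1
            acc[w] = min(max(num / den, 1e-6), 1 - 1e-6)  # clamp away from 0/1
    return {t: int(prob[t] >= 0.5) for t in tasks}, acc
```

Majority voting ignores worker reliability, whereas the EM variant downweights workers whose answers disagree with the inferred truths; on clean data the two agree, but the learned accuracies expose unreliable (or adversarial) workers.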




Published In

SIGMOD '17: Proceedings of the 2017 ACM International Conference on Management of Data
May 2017
1810 pages
ISBN:9781450341974
DOI:10.1145/3035918


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. crowdsourcing
  2. data management
  3. optimization

Qualifiers

  • Research-article

Funding Sources

  • NSERC Discovery Grant
  • RGC Projects
  • NSF of China
  • 973 of China

Conference

SIGMOD/PODS'17

Acceptance Rates

Overall Acceptance Rate 785 of 4,003 submissions, 20%


Cited By

  • (2024)Crowdsourcing Geospatial Data for Earth and Human Observations: A ReviewJournal of Remote Sensing10.34133/remotesensing.01054Online publication date: 22-Jan-2024
  • (2024)The Geospatial Crowd: Emerging Trends and Challenges in Crowdsourced Spatial AnalyticsISPRS International Journal of Geo-Information10.3390/ijgi1306016813:6(168)Online publication date: 21-May-2024
  • (2024)Demystifying Data Management for Large Language ModelsCompanion of the 2024 International Conference on Management of Data10.1145/3626246.3654683(547-555)Online publication date: 9-Jun-2024
  • (2024)Privacy-Preserving Competitive Detour Tasking in Spatial CrowdsourcingIEEE Transactions on Services Computing10.1109/TSC.2024.3511992(1-14)Online publication date: 2024
  • (2024)Task Assignment With Efficient Federated Preference Learning in Spatial CrowdsourcingIEEE Transactions on Knowledge and Data Engineering10.1109/TKDE.2023.331181636:4(1800-1814)Online publication date: Apr-2024
  • (2024)Toward Fine-Grained Task Allocation With Bilateral Access Control for Intelligent Transportation SystemsIEEE Internet of Things Journal10.1109/JIOT.2023.334457711:8(14814-14828)Online publication date: 15-Apr-2024
  • (2024)Semi-Asynchronous Online Federated Crowdsourcing2024 IEEE 40th International Conference on Data Engineering (ICDE)10.1109/ICDE60146.2024.00319(4180-4193)Online publication date: 13-May-2024
  • (2023)Implementation of Digital Geotwin-Based Mobile Crowdsensing to Support Monitoring System in Smart CitySustainability10.3390/su1505394215:5(3942)Online publication date: 21-Feb-2023
  • (2023)Federated Few-Shot Learning for Mobile NLPProceedings of the 29th Annual International Conference on Mobile Computing and Networking10.1145/3570361.3613277(1-17)Online publication date: 2-Oct-2023
  • (2023)On Dynamically Pricing Crowdsourcing TasksACM Transactions on Knowledge Discovery from Data10.1145/354401817:2(1-27)Online publication date: 20-Feb-2023
