
Efficient Evaluation of AI Workers for the Human+AI Crowd Task Assignment


Abstract:

Nowadays, it is common practice for crowd workers to develop ML models that classify data items. We envision a Human+AI crowd in which crowd programmers develop "AI workers," black-box software agents that work alongside human workers. The problem is that evaluating such AI workers differs from evaluating human workers: an AI worker is not necessarily a spam worker even if its accuracy is low at the beginning of its learning process or for a particular label. Therefore, existing work evaluates the output from AI workers every time they produce task results. Such naive evaluation clearly does not scale, because there is a tremendous number of task results to be evaluated. This paper addresses the problem of how to efficiently evaluate AI worker outputs by skipping the evaluation when an AI worker is unlikely to satisfy the expected accuracy. We conducted an experiment comparing two strategies and found that both reduce the number of evaluations by orders of magnitude while keeping the number of task assignments to AI workers.
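
To make the idea of "skipping evaluation when the AI is unlikely to satisfy the expected accuracy" concrete, the following is a minimal Python sketch of one possible skipping rule. It is an assumption for illustration only, not the paper's actual strategies: it stops evaluating an AI worker once an upper confidence bound on its estimated accuracy falls below a requester-specified threshold. The names REQUIRED_ACCURACY, accuracy_upper_bound, and should_skip_evaluation are hypothetical.

import math

# Hypothetical evaluation-skipping rule (illustrative sketch, not the paper's method):
# stop evaluating an AI worker once an upper confidence bound on its accuracy
# falls below the accuracy the requester expects.

REQUIRED_ACCURACY = 0.9  # assumed requester-specified accuracy threshold
CONFIDENCE = 0.95        # assumed confidence level for the bound

def accuracy_upper_bound(correct: int, evaluated: int, confidence: float = CONFIDENCE) -> float:
    """Hoeffding-style upper confidence bound on the worker's true accuracy."""
    if evaluated == 0:
        return 1.0  # no evidence yet, so never skip
    mean = correct / evaluated
    slack = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * evaluated))
    return min(1.0, mean + slack)

def should_skip_evaluation(correct: int, evaluated: int) -> bool:
    """Skip further evaluation when the worker is unlikely to ever meet the threshold."""
    return accuracy_upper_bound(correct, evaluated) < REQUIRED_ACCURACY

if __name__ == "__main__":
    # A worker correct on 40 of 100 evaluated outputs is skipped from further evaluation.
    print(should_skip_evaluation(correct=40, evaluated=100))  # True
    # A worker with very little history is not skipped outright.
    print(should_skip_evaluation(correct=1, evaluated=3))     # False

In practice, a rule like this trades a small risk of prematurely skipping a still-learning AI worker against a large reduction in the number of evaluations, which is the trade-off the two strategies in the paper are compared on.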
Date of Conference: 17-20 December 2022
Date Added to IEEE Xplore: 26 January 2023
Conference Location: Osaka, Japan

