Human+AI Crowd Task Assignment Considering Result Quality Requirements

2021 
This paper addresses the problem of dynamically assigning tasks to a crowd consisting of AI and human workers. Crowdsourcing the creation of AI programs is now common practice. When applying such AI programs to a set of tasks, we often take an ``all-or-nothing'' approach that waits until the AI is good enough before using it. However, this approach prevents us from exploiting the answers provided by the AI until the process is completed, and it also prevents the exploration of different AI candidates. As a result, integrating the created AI, both with other AIs and with human computation, to obtain a more efficient human-AI team is not trivial. In this paper, we propose a method that addresses these issues by adopting a ``divide-and-conquer'' strategy for AI worker evaluation. Here, an assignment is optimal when the number of tasks assigned to humans is minimal, provided that the final results satisfy a given quality requirement. This paper presents theoretical analyses of the proposed method and an extensive set of experiments conducted on open benchmarks and real-world datasets. The results show that the algorithm assigns many more tasks to AI workers than the baselines do when it is difficult for the AIs to satisfy the quality requirement over the whole set of tasks. They also show that it flexibly changes the number of tasks assigned to multiple AI workers in accordance with the performance of the available AI workers.
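To make the ``divide-and-conquer'' idea above concrete, here is a minimal Python sketch; it is not the paper's actual algorithm, and the function names (assign_tasks, human_label), the worker interfaces, and the raw sample-accuracy test are all assumptions made for illustration. The idea it demonstrates: recursively split the task set, spend a few human labels per subset to estimate each AI worker's quality there, and hand a subset to an AI only when the estimate meets the requirement, otherwise split further or fall back to humans.

```python
import random

def assign_tasks(tasks, ai_workers, quality_req, human_label,
                 min_size=8, sample_size=4):
    """Hypothetical divide-and-conquer assignment sketch.

    tasks       : list of hashable task objects
    ai_workers  : list of callables, task -> answer
    quality_req : required accuracy on any subset handed to an AI (e.g. 0.9)
    human_label : callable, task -> answer (counts as a human assignment)
    """
    assignments = {"human": []}  # worker id -> list of assigned tasks

    def recurse(subset):
        if len(subset) <= min_size:          # too small to test reliably
            assignments["human"].extend(subset)
            return
        # Label a small sample via humans to estimate AI quality here.
        sample = random.sample(subset, sample_size)
        truth = {t: human_label(t) for t in sample}
        assignments["human"].extend(sample)
        # Pick the AI worker with the highest sample accuracy.
        best_i, best_acc = None, -1.0
        for i, ai in enumerate(ai_workers):
            acc = sum(ai(t) == truth[t] for t in sample) / len(sample)
            if acc > best_acc:
                best_i, best_acc = i, acc
        rest = [t for t in subset if t not in truth]
        if best_acc >= quality_req:          # AI looks good enough here
            assignments.setdefault(f"ai_{best_i}", []).extend(rest)
        else:                                # divide and conquer
            mid = len(rest) // 2
            recurse(rest[:mid])
            recurse(rest[mid:])

    recurse(list(tasks))
    return assignments

# Toy usage: tasks are integers, the "ground truth" is parity.
result = assign_tasks(list(range(100)),
                      [lambda t: t % 2, lambda t: 0],
                      quality_req=0.9,
                      human_label=lambda t: t % 2)
```

Note that the sampled tasks themselves go to humans, which is exactly the cost the abstract's optimality criterion minimizes, and that the naive sample-accuracy check here carries no statistical guarantee; the paper's theoretical analyses presumably replace it with a test that bounds the probability of violating the quality requirement.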