Human-collective visualization transparency

2021 
Interest in collective robotic systems has increased rapidly due to the potential benefits, such as increased safety and support, that they can offer operators performing challenging tasks in high-risk environments. The limited human-collective transparency research has focused on how the design of the models (i.e., algorithms), visualizations, and control mechanisms influences human-collective behaviors. Traditional collective visualizations have shown all of the individual entities composing a collective, which may become problematic as collectives scale in size and heterogeneity and tasks become more demanding. Human operators can become overloaded with information, which negatively affects their understanding of the collective’s current state and overall behaviors and can degrade teaming performance. This manuscript contributes to the human-collective domain by analyzing how visualization transparency influences remote supervision of collectives. The visualization transparency analysis expands traditional transparency assessments by examining how operators with different individual capabilities are affected, operator comprehension, interface usability, and human-collective team performance. Metrics that effectively assess the visualization transparency of collectives are identified, and design guidance is provided to inform future real-world human-collective system designs. Individual agent and abstract screen-based visualizations were analyzed as operators remotely supervised sequential best-of-n decision-making tasks involving four collectives of 200 entities each (800 entities in total). The abstract visualization provided better transparency by enabling operators with differing individual capabilities to perform comparably and by promoting higher human-collective performance.