Assessing Annotation Consistency in the Wild

2018 
The process of human annotation of sensor data is at the base of research areas such as participatory sensing and mobile crowdsensing. While much research has been devoted to assessing the quality of sensor data, the same cannot be said about annotations, which are fundamental to obtaining a clear understanding of users' experience. We present an evaluation of an interdisciplinary annotation methodology allowing users to continuously annotate their everyday life. The evaluation is done on a dataset from a project focused on the behaviour of students and how this impacts their academic performance. We focus on those annotations concerning locations and movements of students, and we evaluate annotation quality by checking their consistency. Results show that students are highly consistent with respect to the random baseline, and that these results can be improved by exploiting the semantics of annotations.
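The paper does not give its exact scoring procedure, but the idea of checking annotation consistency against a random baseline can be sketched as follows: measure how often two annotation streams (here, hypothetical location and movement labels for the same time slots) agree, then compare that to the agreement obtained after shuffling one stream. All names and the toy data are illustrative assumptions, not the authors' dataset or method.

```python
import random

def consistency(a, b):
    """Fraction of time slots where two annotation streams agree."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def random_baseline(a, b, trials=1000, seed=0):
    """Mean agreement after repeatedly shuffling one stream:
    an estimate of chance-level consistency."""
    rng = random.Random(seed)
    b = list(b)  # copy so the caller's list is not mutated
    total = 0.0
    for _ in range(trials):
        rng.shuffle(b)
        total += consistency(a, b)
    return total / trials

# Toy example: location annotations vs. inferred places per time slot
loc = ["home", "uni", "uni", "lib", "home", "home"]
mov = ["home", "uni", "uni", "uni", "home", "home"]

observed = consistency(loc, mov)       # 5 of 6 slots agree
baseline = random_baseline(loc, mov)   # chance agreement, well below observed
```

An annotator would count as consistent when `observed` is substantially above `baseline`; exploiting semantics (e.g. treating "lib" as compatible with "uni campus") would amount to relaxing the equality test inside `consistency`.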