Leveraging Cognitive Diagnosis to Improve Peer Assessment in MOOCs

2021 
A major challenge faced by popular massive open online courses (MOOCs) is the assessment of large-scale open-ended assignments submitted by students. Recently, peer assessment has become a mainstream paradigm for grading open-ended assignments at scale. In peer assessment, students also act as graders, each grading a small number of their peers’ assignments, and the peer grades are then aggregated to predict a true score for each assignment. The collected peer grades are usually inaccurate because graders differ in reliability and bias. To improve accuracy, several probabilistic graphical models have been proposed to model the reliability and bias of each grader. However, none of these models consider graders’ competency in the assignments to be graded, which has been found to be highly informative. We propose two new probabilistic graphical models to improve the accuracy of cardinal peer assessment based on the well-accepted cognitive diagnosis technique. Specifically, the cognitive diagnosis model DINA is applied to determine grader competency from historical tests or assignments. This information is then used to refine the modeling of grader reliability in each of the proposed models. Moreover, an effective model inference algorithm is proposed to infer the true scores of assignments. Experimental results on real-world datasets show that the two proposed models outperform state-of-the-art models and that incorporating grader competency contributes to improved score estimation.
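The abstract builds on the standard DINA (Deterministic Inputs, Noisy "And" gate) model, under which a student answers an item correctly only if they master every skill the item requires, up to slip and guess noise. As a minimal sketch of that standard formulation (not the paper's own implementation, whose details are not given here), the item response probability P(X_ij = 1) = (1 - s_j)^{η_ij} · g_j^{1 - η_ij}, with η_ij = 1 iff student i masters all skills required by item j, can be computed as:

```python
import numpy as np

def dina_correct_prob(alpha, q, slip, guess):
    """P(correct response) for one student under the standard DINA model.

    alpha: (K,) binary skill-mastery vector for the student.
    q:     (J, K) binary Q-matrix; q[j, k] = 1 if item j requires skill k.
    slip, guess: (J,) per-item slip and guess parameters.
    """
    # eta[j] = 1 iff the student masters every skill that item j requires
    eta = np.all(alpha >= q, axis=1).astype(float)
    # Mastery -> correct with prob 1 - slip; non-mastery -> correct only by guessing
    return (1.0 - slip) ** eta * guess ** (1.0 - eta)

# Student masters skill 0 but not skill 1; item 0 needs skill 0, item 1 needs skill 1
alpha = np.array([1, 0])
q = np.array([[1, 0],
              [0, 1]])
print(dina_correct_prob(alpha, q, np.array([0.1, 0.1]), np.array([0.2, 0.2])))
# → [0.9 0.2]
```

In the paper's setting, the mastery vector estimated this way from historical tests would summarize a grader's competency, which then informs the reliability term in the peer-grade aggregation models.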