In-class coding-based summative assessments: tools, challenges, and experience

2018 
Pencil-and-paper coding questions on computer science exams are unrealistic: real developers work at a keyboard with extensive resources at hand, rather than on paper with few or no notes. We address the challenge of administering a proctored exam in which students must write code that passes instructor-provided test cases as well as write test cases of their own. The exam environment allows students broad access to the Internet resources they would use for take-home programming assignments, while blocking their ability to use that access for direct communication with colluders. Our system supports cumulative questions (in which later parts depend on correctly answering earlier parts) by allowing the test-taker to reveal one or more hints by sacrificing partial credit for the question. Autograders built into the exam environment provide immediate feedback to the student on their exam grade. In case of grade disputes, a virtual machine image reflecting all of the student's work is preserved for later inspection. While elements of our scheme have appeared in the literature (autograding, semi-locked-down computer environments for exam-taking, hint "purchasing"), we believe we are the first to combine them into a system that enables realistic in-class coding-based exams with broad Internet access. We report on lessons and experience creating and administering such an exam, including autograding-related pitfalls for high-stakes exams, and invite others to use and improve on our tools and methods.
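The scoring interplay the abstract describes — autograder results combined with hint "purchasing" that sacrifices partial credit — can be sketched as follows. This is an illustrative model only, assuming a fixed per-hint penalty and a simple pass-fraction autograder; the function name, signature, and penalty values are hypothetical, not taken from the authors' implementation.

```python
# Hypothetical sketch of hint "purchasing": each hint a student reveals
# sacrifices a fixed share of the question's credit, and the remaining
# credit is scaled by the fraction of autograder tests passed.
# All names and penalty values below are illustrative assumptions.

def question_score(max_points: float, tests_passed: int, tests_total: int,
                   hints_revealed: int, penalty_per_hint: float = 0.25) -> float:
    """Score = autograder pass fraction x credit remaining after hint purchases."""
    pass_fraction = tests_passed / tests_total
    # Credit can be purchased away but never goes negative.
    credit_multiplier = max(0.0, 1.0 - penalty_per_hint * hints_revealed)
    return max_points * pass_fraction * credit_multiplier

# Example: a 10-point question where 3 of 4 tests pass and one hint
# was revealed yields 10 * 0.75 * 0.75 = 5.625 points.
print(question_score(10, 3, 4, 1))
```

One design point this model captures: because hints unlock correct answers to earlier parts of a cumulative question, the penalty must be large enough that revealing a hint is a genuine trade-off rather than a free shortcut.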