Performance Variations of the Bayesian Model of Peer-Assessment Implemented in OpenAnswer in Response to Modifications of the Number of Peers Assessed and of the Quality of the Class

Abstract

The paper presents a study of the performance variations of the Bayesian model of peer-assessment implemented in OpenAnswer, in terms of grade prediction accuracy. OpenAnswer (OA) models a peer-assessment session as a Bayesian network. For each student, a sub-network contains variables describing relevant aspects of both the individual cognitive state and the state of the current assessment session. The sub-networks are interconnected to obtain the final, global network. The evidence propagated through the global network consists of all the grades given by students to their peers, together with a subset of the teacher’s corrections. Among the possible influencing factors, the paper investigates how grade prediction performance depends on the quality of the class, i.e., the average level of proficiency of its students, and on the number of peers assessed by each student. The results show that both factors affect the accuracy of the marks inferred by the Bayesian network, when compared with the available ground truth produced by teachers.
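To make the idea concrete, the sketch below builds a deliberately tiny two-student network of the general kind the abstract describes: each student has a latent knowledge variable, a peer grade depends on both the assessor's and the assessee's knowledge, and a teacher's correction depends only on the assessee's knowledge. All variable names, states, and probability tables here are illustrative assumptions for a toy example, not the actual OpenAnswer model; inference is done by brute-force enumeration, which is feasible only at this toy scale.

```python
# Toy two-student Bayesian peer-assessment network (illustrative only;
# states, CPTs, and structure are assumptions, not the OpenAnswer model).
from itertools import product

STATES = ("low", "high")

# Prior over each student's knowledge K_i; the "quality of the class"
# studied in the paper would shift this prior toward "high" or "low".
p_K = {"low": 0.5, "high": 0.5}

def p_grade(grade, k_assessor, k_assessee):
    """P(peer grade | assessor knowledge, assessee knowledge).

    Assumed CPT: a knowledgeable assessor matches the assessee's true
    level more reliably than a weak one.
    """
    correct = "high" if k_assessee == "high" else "low"
    reliability = 0.9 if k_assessor == "high" else 0.6
    return reliability if grade == correct else 1.0 - reliability

def p_teacher(mark, k_student):
    """P(teacher's mark | student knowledge): assumed near-perfect."""
    return 0.95 if mark == k_student else 0.05

def posterior_k2(grade_1_to_2, teacher_mark_1):
    """P(K2 | G12, C1) by exact enumeration over the joint distribution.

    Evidence: the grade student 1 gave to student 2, plus the teacher's
    correction of student 1 (the 'subset of teacher corrections').
    """
    joint = {}
    for k1, k2 in product(STATES, STATES):
        joint[(k1, k2)] = (p_K[k1] * p_K[k2]
                           * p_grade(grade_1_to_2, k1, k2)
                           * p_teacher(teacher_mark_1, k1))
    z = sum(joint.values())
    return {k2: sum(p for (a, b), p in joint.items() if b == k2) / z
            for k2 in STATES}

# A "high" grade from a teacher-confirmed strong assessor is strong
# evidence that student 2's knowledge is also high.
post = posterior_k2(grade_1_to_2="high", teacher_mark_1="high")
print({k: round(v, 3) for k, v in post.items()})
```

In the real system each student contributes several such grade variables (one per peer assessed), and propagating all of them jointly is what lets the network infer marks for students the teacher never corrected.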

Publication
2017 16th International Conference on Information Technology Based Higher Education and Training (ITHET)
Luca Moschella
Ph.D. student in Computer Science

I’m excited by many fields of Computer Science and, in particular, by Artificial Intelligence.
