Automated Short Answer Grading Experiments
===========================================

Code for the paper

Torsten Zesch, Michael Heilman, and Aoife Cahill. Reducing Annotation Efforts in Supervised Short Answer Scoring. In Proceedings of the 10th Building Educational Applications (BEA) Workshop at NAACL, 2015.

Abstract
--------

Automated short answer scoring is increasingly used to give students timely feedback about their learning progress. Building scoring models comes with high costs, as state-of-the-art methods using supervised learning require large amounts of hand-annotated data. We analyze the potential of recently proposed methods for semi-supervised learning based on clustering. We find that all examined methods (centroids, all clusters, selected pure clusters) are mainly effective for very short answers and do not generalize well to several-sentence responses.
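To illustrate the idea behind the "centroids" strategy mentioned in the abstract, here is a minimal sketch, assuming scikit-learn: cluster the unlabeled answers, obtain a human score only for the answer nearest each cluster centroid, and propagate that score to the rest of the cluster to create training data for a supervised scorer. This is an illustration, not the code released in this repository; the helper name, the toy answers, and the stand-in annotator are assumptions.

```python
# Sketch of centroid-based pseudo-labeling (illustrative, not the paper's code).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

def propagate_centroid_labels(answers, annotate, n_clusters=2):
    """Annotate one answer per cluster and spread its score to the cluster."""
    X = TfidfVectorizer().fit_transform(answers)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    # Index of the answer closest to each centroid: the only ones annotated.
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
    centroid_scores = {c: annotate(answers[i]) for c, i in enumerate(closest)}
    # Every answer inherits the score of its cluster's annotated representative.
    return np.array([centroid_scores[c] for c in km.labels_])

# Toy usage: the lambda stands in for a human annotator scoring the answer.
answers = ["mitochondria produce energy",
           "energy is produced in mitochondria",
           "i do not know",
           "no idea at all"]
pseudo_labels = propagate_centroid_labels(answers,
                                          annotate=lambda a: int("mitochondria" in a))
print(pseudo_labels)  # pseudo-labels a supervised scorer could be trained on
```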