SIGCSE Virtual 2024
Thu 5 - Sun 8 December 2024
Thu 5 Dec 2024 14:15 - 14:30 at Track 1 - Posters 11

Open-ended questions test a more thorough understanding than closed-ended questions and are often the preferred assessment method. However, open-ended questions are tedious to grade and subject to personal bias, so there have been efforts to speed up the grading process through automation. Short Answer Grading (SAG) systems aim to automatically score students’ answers in examinations. Despite growth in SAG methods and capabilities, there is no comprehensive short-answer grading benchmark that spans different subjects, grading scales, and score distributions, which makes it hard to assess how well current automated grading methods generalize. In this preliminary work, we introduce the combined ASAG2024 benchmark to facilitate the comparison of automated grading systems. It combines seven commonly used short-answer grading datasets into a common structure and grading scale. On our benchmark, we evaluate a set of recent SAG methods, revealing that while LLM-based approaches reach new high scores, they are still far from human performance. This opens up avenues for future research on human-machine SAG systems.
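
The benchmark data itself is not reproduced here, but as an illustration of what "a common structure and grading scale" can mean in practice, the following is a minimal Python sketch, not the authors' code: it maps records from heterogeneous source datasets onto one record layout and rescales raw scores onto [0, 1]. All names in it (SAGRecord, normalize_score, dataset_a, dataset_b) are hypothetical.

    # Hypothetical sketch of unifying short-answer grading datasets onto a
    # common structure and a [0, 1] grading scale; not the authors' code.
    from dataclasses import dataclass

    @dataclass
    class SAGRecord:
        question: str
        reference_answer: str
        student_answer: str
        score: float  # normalized to [0, 1]
        source: str   # which original dataset the record came from

    def normalize_score(raw: float, min_score: float, max_score: float) -> float:
        """Map a raw score from its original grading scale onto [0, 1]."""
        if max_score == min_score:
            raise ValueError("degenerate grading scale")
        return (raw - min_score) / (max_score - min_score)

    # Example: a 0-5 point dataset and a 0-100 point dataset become comparable.
    records = [
        SAGRecord("What is a stack?", "A LIFO data structure",
                  "A last-in first-out list", normalize_score(4, 0, 5), "dataset_a"),
        SAGRecord("Define recursion.", "A function calling itself",
                  "When a function calls itself", normalize_score(85, 0, 100), "dataset_b"),
    ]

    for r in records:
        print(f"{r.source}: {r.score:.2f}")

With scores on a shared scale, grading systems can be evaluated across all source datasets with a single metric, which is the kind of comparison the benchmark is meant to enable.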

Thu 5 Dec

Displayed time zone: (UTC) Coordinated Universal Time

14:00 - 14:30
Posters 11 at Track 1
14:00 (15m) Poster
Integrating Making and Computational Thinking in Early Childhood Education: Preliminary Outcomes from a Teacher Trainer Workshop on Designing an Intervention
Tobias Bahr (University of Stuttgart)
14:15 (15m) Poster
ASAG2024: A Combined Benchmark for Short Answer Grading
Gérôme Meyer (ZHAW University of Applied Sciences), Philip Breuer (ZHAW University of Applied Sciences), Jonathan Fürst (ZHAW University of Applied Sciences)