Qualifying Exam: Automatically Providing Personalized Feedback to Programming Assignments

Abstract: 

Autograding systems are increasingly being deployed to meet the challenge of teaching programming at scale. Studies show that providing personalized feedback along with the grade can substantially benefit novice programmers' learning. However, providing personalized feedback on programming assignments does not scale well. In my research, I investigate ways to improve the trade-off between the quality of the feedback and the manual effort required to provide it to a large number of students. First, we propose a methodology for extending autograders to provide feedback on incorrect programs. Our methodology starts with the instructor identifying the concepts and skills important to each programming assignment, designing the assignment, and designing a comprehensive test suite. The tests are then applied to code submissions to learn classes of common errors and to produce classifiers that automatically categorize errors in future submissions. The instructor maps the errors to concepts and skills and writes hints to help students find their misconceptions and mistakes. We have applied the methodology to two assignments from our Introduction to Computer Science course. We first evaluated the automatic error categorization manually and found an average accuracy above 90%. We then deployed the hint system during two semesters at Rutgers. Results show that three times as many students were able to correct their code to pass all the test cases when hints were provided as when hints were not provided. However, on average, almost half of the students failed to correct their code to pass all the test cases even when hints were provided. This work shows promising results but (a) still requires a large manual effort from the instructor, and (b) gives fairly coarse-grained hints that do not seem to help some students.
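The core of the categorization step above can be sketched as follows. This is a minimal illustration, not the system described in the talk: it assumes each submission is a callable and each test is a (name, input, expected output) triple, runs every submission against the test suite, and groups submissions that share the same pass/fail signature. All function names and the toy assignment are hypothetical.

```python
from collections import defaultdict

def failure_signature(submission, tests):
    """Return a tuple marking which tests the submission fails.

    Entries are the failing test's name, or None for a passing test,
    so submissions with the same mistake get the same signature.
    """
    sig = []
    for name, arg, expected in tests:
        try:
            ok = submission(arg) == expected
        except Exception:
            ok = False  # a crash counts as a failure
        sig.append(name if not ok else None)
    return tuple(sig)

def categorize(submissions, tests):
    """Group submission ids by shared failure signature."""
    classes = defaultdict(list)
    for sid, fn in submissions.items():
        classes[failure_signature(fn, tests)].append(sid)
    return classes

# Toy assignment: implement absolute value.
tests = [("zero", 0, 0), ("positive", 5, 5), ("negative", -3, 3)]
submissions = {
    "alice": lambda x: abs(x),               # correct
    "bob":   lambda x: x,                    # forgets the negative case
    "carol": lambda x: -x if x < 0 else x,   # correct
    "dave":  lambda x: x,                    # same bug as bob
}
classes = categorize(submissions, tests)
for sig, students in sorted(classes.items(), key=str):
    failed = [t for t in sig if t]
    print(failed, sorted(students))
# bob and dave land in one error class (fail "negative"),
# so a single instructor-written hint can cover both.
```

In the actual methodology, an instructor would then map each learned error class to the concepts and skills it reflects and attach a hint; the classifier applies that hint automatically to future submissions with the same signature.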
Next, I will investigate students' intentions when writing code, as reflected by the usage of variables in their code, in comparison to the expected variable-role patterns seen across the class. Identifying students' intentions in this manner via static analysis will then be used to give personalized feedback.
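A rough sense of what variable-role detection via static analysis could look like (a minimal sketch assuming Python submissions, using only the standard `ast` module; the two role labels and the detection heuristic are my own illustration, not the proposed method):

```python
import ast

def variable_roles(source):
    """Classify variables by how augmented assignments use them.

    Heuristic (illustrative only):
      x += <constant>  -> "stepper"  (steps through a value sequence)
      x += <other>     -> "gatherer" (accumulates values from elsewhere)
    """
    roles = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.AugAssign) and isinstance(node.target, ast.Name):
            name = node.target.id
            if isinstance(node.value, ast.Constant):
                roles.setdefault(name, "stepper")
            else:
                roles[name] = "gatherer"
    return roles

student_code = """
total = 0
count = 0
for item in data:
    total += item
    count += 1
"""
roles = variable_roles(student_code)
print(roles)  # {'total': 'gatherer', 'count': 'stepper'}
```

Comparing the roles inferred from a student's submission against the role pattern expected for the assignment (e.g., a missing "gatherer" in a summing task) could then drive a targeted hint about what the student apparently intended.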

Speaker: 
Georgiana Haldeman
Location: 
CoRE B (305)
Event Date: 
11/30/2018 - 10:30am
Committee: 
Prof. Thu Nguyen (Chair), Prof. Alex Borgida, Prof. Santosh Nagarakatte, Prof. Jingjin Yu
Event Type: 
Qualifying Exam
Organization: 
Dept. Computer Science