[1]
Jaime Spacco, David Hovemeyer, and William Pugh.
An Eclipse-based course project snapshot and submission system.
In 3rd Eclipse Technology Exchange Workshop (eTX), Vancouver, BC, October 24, 2004.

Much research has been done on techniques for teaching students how to program. However, it is usually difficult to quantify exactly how students work: instructors typically see students' work only when they submit their projects or come to office hours. Another common problem in introductory programming courses is that student code is subjected to rigorous testing only once it has been submitted. Both of these problems can be viewed as a lack of feedback between students and instructors.
[2]
Jaime Spacco, Jaymie Strecker, David Hovemeyer, and William Pugh.
Software repository mining with Marmoset: An automated programming project snapshot and testing system.
In Proceedings of the Mining Software Repositories Workshop (MSR 2005), St. Louis, Missouri, USA, May 2005.

Most computer science educators hold strong opinions about the "right" approach to teaching introductory-level programming. Unfortunately, we have comparatively little hard evidence about the effectiveness of these various approaches, because we generally lack the infrastructure to obtain sufficiently detailed data about novices' programming habits.
[3]
David Hovemeyer, Jaime Spacco, and William Pugh.
Evaluating and tuning a static analysis to find null pointer bugs.
In PASTE '05: Proceedings of the 6th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, Lisbon, Portugal, September 5-6, 2005. ACM.

Using static analysis to detect memory access errors, such as null pointer dereferences, is not a new problem. However, much of the previous work has used rather sophisticated analysis techniques to detect such errors.
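The bug class at issue can be illustrated in a few lines of Java. The snippet below is a hypothetical example (not taken from the paper) of the kind of simple, common mistake that even a relatively unsophisticated analysis can flag: a value is compared against null on one path and then unconditionally dereferenced on a path where it may still be null.

    // Hypothetical illustration: 'path' is checked against null, but the
    // null branch falls through to a dereference, so a NullPointerException
    // is possible. A simple dataflow analysis can flag this pattern.
    public class NullDerefExample {
        static int pathLength(String path) {
            if (path == null) {
                System.err.println("no path given");
                // falls through: 'path' is still null here
            }
            return path.length(); // possible null pointer dereference
        }

        public static void main(String[] args) {
            System.out.println(pathLength(args.length > 0 ? args[0] : null));
        }
    }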
[4]
Jaime Spacco, David Hovemeyer, William Pugh, Jeff Hollingsworth, Nelson Padua-Perez, and Fawzi Emad.
Experiences with Marmoset.
Technical report, 2006.

Many project submission and testing systems have been developed. These systems can be beneficial for both students and instructors: students benefit from automatic feedback on which parts of their projects work correctly and which parts still need work, while instructors benefit from real-time feedback on the progress of individual students and the class as a whole. A limitation of traditional project submission and testing systems is that they track only the project versions that students explicitly submit; what students are doing between submissions remains shrouded in mystery. Based on experience, we know that many students who hit a stumbling block resort to unproductive trial-and-error programming. As instructors, we would like to know what these stumbling blocks are. To help understand how students work, we developed Marmoset, a project submission, testing, and snapshot system. The system has two novel features. First, it employs a token-based incentive system to encourage students to start work early and to think critically about their work. Second, Marmoset can be configured to use CVS to transparently capture a project snapshot every time students save a file. The detailed development history thus captured offers a fine-grained view of each student's progress. In this paper, we describe initial experiences with Marmoset, from the perspectives of both instructors and students. We also describe some initial research results from analyzing the student snapshot database.
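As a rough illustration of the token-based incentive idea, the sketch below implements a regenerating-token policy in Java. All names and numbers here are assumptions for illustration, not Marmoset's actual implementation: students hold a small pool of tokens, spending one reveals limited feedback from a withheld test suite, and a spent token regenerates after a fixed period, which makes rapid-fire trial-and-error submissions costly.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of a regenerating-token policy (hypothetical; names and
    // numbers are illustrative, not Marmoset's actual design).
    public class ReleaseTokens {
        private final int maxTokens;
        private final Duration regenPeriod;
        private final Deque<Instant> spent = new ArrayDeque<>(); // times tokens were spent

        public ReleaseTokens(int maxTokens, Duration regenPeriod) {
            this.maxTokens = maxTokens;
            this.regenPeriod = regenPeriod;
        }

        // A spent token regenerates once regenPeriod has elapsed.
        public synchronized int tokensAvailable(Instant now) {
            spent.removeIf(t -> Duration.between(t, now).compareTo(regenPeriod) >= 0);
            return maxTokens - spent.size();
        }

        // Spend a token to see limited results from the withheld tests.
        public synchronized boolean trySpend(Instant now) {
            if (tokensAvailable(now) == 0) {
                return false; // student must wait for a token to regenerate
            }
            spent.addLast(now);
            return true;
        }
    }

With, say, three tokens that each take 24 hours to regenerate, a student can check work against the withheld tests only a few times a day, which rewards starting early and thinking about each run.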
[5]
Jaime Spacco, David Hovemeyer, William Pugh, Jeff Hollingsworth, Nelson Padua-Perez, and Fawzi Emad.
Experiences with Marmoset: Designing and using an advanced submission and testing system for programming courses.
In ITiCSE '06: Proceedings of the 11th Annual Conference on Innovation and Technology in Computer Science Education. ACM Press, 2006.

Two important questions regarding automated submission and testing systems are: What kind of feedback should we give students as they work on their programming assignments, and how can we study in more detail the programming assignment development process of novices?
[6]
Jaime Spacco, Titus Winters, and Tom Payne.
Inferring use cases from unit testing.
In AAAI Workshop on Educational Data Mining, New York, NY, USA, July 2006. ACM Press.

We present techniques for analyzing score matrices of unit test outcomes from snapshots of CS2 student code taken throughout the development cycle. This analysis includes a technique for estimating the number of fundamentally different features covered by the unit tests, as well as a survey of which algorithms best match human intuition when grouping tests into related clusters. Unlike previous investigations into topic clustering of score matrices, we successfully identify algorithms that perform with good accuracy on this task. We also discuss the data gathered by the Marmoset system, which has been used to collect over 100,000 snapshots of student programs and associated test results.
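To make the score-matrix idea concrete, here is a small Java sketch under assumed conventions (rows are snapshots, columns are tests, entries are pass/fail). It groups tests whose outcome columns agree on at least a given fraction of snapshots, merging via union-find; the number of resulting groups gives a crude estimate of how many distinct features the tests exercise. This is an illustrative baseline, not one of the clustering algorithms evaluated in the paper.

    // Sketch: group unit tests whose pass/fail columns mostly agree across
    // snapshots. Illustrative baseline only; the matrix layout and the
    // agreement threshold are assumptions, not the paper's algorithms.
    public class TestClustering {
        // Returns a cluster label per test; tests sharing a label behaved alike.
        public static int[] cluster(boolean[][] scores, double minAgreement) {
            int nTests = scores[0].length;
            int[] parent = new int[nTests];
            for (int i = 0; i < nTests; i++) parent[i] = i;
            for (int a = 0; a < nTests; a++) {
                for (int b = a + 1; b < nTests; b++) {
                    if (agreement(scores, a, b) >= minAgreement) {
                        union(parent, a, b); // single-linkage merge
                    }
                }
            }
            for (int i = 0; i < nTests; i++) parent[i] = find(parent, i);
            return parent;
        }

        // Fraction of snapshots on which tests a and b have the same outcome.
        private static double agreement(boolean[][] scores, int a, int b) {
            int same = 0;
            for (boolean[] row : scores) {
                if (row[a] == row[b]) same++;
            }
            return (double) same / scores.length;
        }

        private static int find(int[] p, int x) {
            return p[x] == x ? x : (p[x] = find(p, p[x]));
        }

        private static void union(int[] p, int a, int b) {
            p[find(p, a)] = find(p, b);
        }
    }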