Faculty Experiences - David Smallberg

DAVID SMALLBERG

Computer Science







Interview Topics


What matters most to you in your teaching?

How are you using technology as a tool to achieve your teaching goals?

How have your students responded to your use of technology?

What new goals do you have for using technology in teaching?

How could the University better facilitate the use of technology in instruction?

Pedagogy


Review materials

Student motivation



Technology


Class web site

Electronic submission of work

Programming


Improved Flexibility Through Automation


We want to teach them not just programming, but programming well. We try to make the problems interesting from the beginning, to get people engaged.

I teach the first computer science course in the major. Going back a couple of decades, it used to be that almost nobody came in with prior programming experience--who had a computer? Now, about 75% or 80% of the class has some prior programming experience, so we need to keep those with none from feeling lost, without boring the ones who've had some programming.
My goal is to give them foundations for (1) knowing how to program, and (2) programming for adaptability. When people write programs, invariably that program is used a lot longer than first intended. They come back to it again and again, adding to it, enhancing it. If you design for that kind of change in the beginning, it makes it a lot easier.

We use programming technology not to teach the material, but to help us administer the course. It used to be that when students turned in assignments, they would drop a floppy disk with their program, along with paper documentation describing the program's structure, into a box in the office. The TAs would have to manually take each floppy, stick it in the computer, and run the program on the test cases to see if it worked. One aspect of the process we found we could automate was correctness-checking. Now the students upload their assignments, both the program and the documentation, through a form on a web page, which passes them to a script. The script runs the program against a fixed set of test cases and produces a correctness score. Of course, we've had to make the assigned program requirements more precise. In the past, we didn't have to specify every last detail, because a human grading it could look and say, "Yeah, that looks all right." Now we have to specify contingencies--"for these circumstances, it must do this, and for those circumstances, it must do that."
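The interview doesn't show the grading script itself; the sketch below just illustrates the idea in Python, assuming each test case pairs an input with its expected output and that the student's program has already been compiled to an executable. All file names and test data here are made up for illustration.

    # Minimal sketch of the correctness-checking idea described above.
    # Assumptions (not from the interview): each test case is a pair of
    # (input text, expected output text), and the student's program has
    # already been compiled to an executable, here called ./submission.
    import subprocess

    TEST_CASES = [
        ("3 4\n", "7\n"),      # illustrative input / expected-output pairs
        ("10 -2\n", "8\n"),
    ]

    def correctness_score(executable="./submission", tests=TEST_CASES, timeout=5):
        passed = 0
        for stdin_text, expected in tests:
            try:
                result = subprocess.run([executable], input=stdin_text,
                                        capture_output=True, text=True,
                                        timeout=timeout)
            except subprocess.TimeoutExpired:
                continue                  # a hung program simply fails this case
            if result.returncode == 0 and result.stdout == expected:
                passed += 1
        return 100.0 * passed / len(tests)

    print("Correctness score:", correctness_score())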

Certain requirements--such as file-naming protocols--have to be satisfied before the student receives a message that the submission has been accepted, and we haven't yet created an automated mechanism for dealing with submissions that fall short. Next year, more checking will probably be done at the time they turn the work in. For example, the script will check that the file names are correct, try compiling the program, and run it on some simple test data. If it doesn't work at all for the simple stuff, it will be too hard to grade for the other stuff, so we'll kick it back to them for revision. It will be great if we can get that working. My only concern is that everybody will try submitting their program every time they make a little change, which might overload the servers. We would have to put some restriction on the number of times you could submit per day.
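A rough sketch of what those planned submit-time checks might look like: required file names, a trial compile, a smoke test on trivial input, and a cap on submissions per day. The specific file names, compiler command, and limit below are illustrative assumptions, not the course's actual configuration.

    # Sketch of the planned submit-time checks: required file names,
    # a trial compile, a smoke test, and a daily submission cap.
    # Names, commands, and limits are illustrative assumptions.
    import subprocess
    from pathlib import Path

    REQUIRED_FILES = ["project.cpp", "report.txt"]   # hypothetical naming protocol
    MAX_SUBMISSIONS_PER_DAY = 10                     # hypothetical limit

    def check_submission(upload_dir, submissions_today):
        if submissions_today >= MAX_SUBMISSIONS_PER_DAY:
            return "Rejected: daily submission limit reached."
        missing = [f for f in REQUIRED_FILES if not (Path(upload_dir) / f).exists()]
        if missing:
            return "Rejected: missing or misnamed files: " + ", ".join(missing)
        compile_run = subprocess.run(
            ["g++", str(Path(upload_dir) / "project.cpp"), "-o", "/tmp/trial"],
            capture_output=True, text=True)
        if compile_run.returncode != 0:
            return "Rejected: program does not compile:\n" + compile_run.stderr
        try:
            smoke = subprocess.run(["/tmp/trial"], input="1 2\n",
                                   capture_output=True, text=True, timeout=5)
        except subprocess.TimeoutExpired:
            return "Rejected: program hangs on a trivial input."
        if smoke.returncode != 0:
            return "Rejected: program crashes on a trivial input."
        return "Accepted: submission passed the basic checks."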

One advantage of automating the submission process is that we've been able to be more flexible with deadlines. With the old manual submission, there was nobody to take your assignment when you turned it in; there was a drop box, and when the office was locked at 5:00, that was it. You could have worked on something for a whole week, and if you were one minute late, you got no credit. Now we give you a due time of, say, 9:00 p.m., but there's a sliding scale: the score loses 10% per hour, prorated to the nearest second. So if you get it in at 9:01 or 9:02 or 9:03, no big deal. That just smooths things out so that you don't have people frustrated about the class for the wrong reasons.
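The sliding scale is simple proration: a submission t seconds late keeps a fraction 1 - 0.10 x (t / 3600) of its score, so one minute late costs about 0.17% and anything more than ten hours late is worth nothing. A sketch using the 9:00 p.m. deadline and 10%-per-hour rate mentioned above (the raw score and dates are made up):

    # Sliding-scale late penalty: 10% per hour, prorated to the second.
    from datetime import datetime

    PENALTY_PER_HOUR = 0.10     # rate mentioned in the interview

    def adjusted_score(raw_score, due_time, submit_time):
        seconds_late = (submit_time - due_time).total_seconds()
        if seconds_late <= 0:
            return raw_score                             # on time: no penalty
        penalty = PENALTY_PER_HOUR * (seconds_late / 3600.0)
        return max(0.0, raw_score * (1.0 - penalty))

    # Example: due 9:00 p.m., submitted 9:03 p.m. -- the penalty is 0.5%.
    due = datetime(2004, 4, 12, 21, 0, 0)
    submitted = datetime(2004, 4, 12, 21, 3, 0)
    print(adjusted_score(90.0, due, submitted))          # prints 89.55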

Another thing the automatic scoring affords us is significant depth of coverage. If we were testing programs by hand, evaluating every last aspect of the programming decisions made would take too long. And if we tested only four or five aspects, the student who happened to get those four or five right, and missed everything else, would get a higher score than somebody who happened to get almost everything right, but messed up on a few of the things we were testing for. Checking automatically, we can write in a greater number of specifications, so testing is consistently more thorough, and in that sense it's a lot fairer: by covering almost everything we can, there are no grounds for complaint. The automation produces uniformity in grading, which the students like.

We also run cheat-checker scripts on the submissions. The reality is that people want to get good grades, but some people don't follow the rules, so up front we have them sign an integrity agreement. In programming, the opportunities for cheating exist along a continuum, from the discussion of ideas to the design of data structures, to writing the actual code. So we let them know in advance what's acceptable collaboration, and what's not. Once you've drawn that line, enforcing it is remarkably easy. Some students think that if they just change the names of some of the variables or a little bit of the structure, it will be undetectable. That might have been the case when we were checking manually--we might not catch two programs with the same source if they were from students in different sections. But the cheat-checker program takes source code from all submissions and compares all possible pairs and comes up with a series of similarity scores, which help us identify the few pairs that are worth looking at by eye.
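The interview doesn't say how the similarity scores are computed; the sketch below only illustrates the all-pairs comparison it describes, using Python's difflib ratio as a stand-in for whatever measure the real cheat-checker uses.

    # Sketch of the cheat-checker's all-pairs comparison. difflib's ratio
    # is used purely as a stand-in similarity measure; the real tool's
    # measure isn't described in the interview.
    from difflib import SequenceMatcher
    from itertools import combinations

    def similarity_scores(submissions):
        """submissions: dict mapping student id -> source code string.
        Returns (score, id1, id2) triples, most similar pairs first."""
        scores = []
        for (a, code_a), (b, code_b) in combinations(submissions.items(), 2):
            scores.append((SequenceMatcher(None, code_a, code_b).ratio(), a, b))
        return sorted(scores, reverse=True)

    # The few highest-scoring pairs are the ones worth looking at by eye.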

Since the checking for correctness and originality is done automatically, more TA time can be spent on consulting, office hours, and evaluating a program's style--its suitability for modification in the future--which still needs the human touch. It also gives us time to grade some aspects of the more open-ended programming problems, because those can't be automated. If you allow students complete freedom to determine the user interface, or the kinds of input their programs will accept, how do you perform automated uniform testing? We've been able to compromise: for a given problem, we might require that some core part be done a certain way, but allow them a degree of flexibility in other parts of the program, which a human can grade. Giving them flexibility is important, because when they're first starting off, the problem domains are kind of boring, so if you don't quickly get into how to do something in an interesting way, they lose enthusiasm. We try to make the problems interesting from the beginning, to get people engaged.

I use the class web site continually, and tell the students to check it at least three times a week. All the assignments go up there, and if some clever student notices an ambiguity in the specification, we're able to get a clarification out right away. The course is, at this point, practically paperless. The TAs can grade the documentation online, annotate it, and e-mail it back. The only paper involved in this class is the syllabus we hand out on the first day, plus the midterm and the final. Everything else is online. If we had a nice way of doing the midterm and the final electronically, we wouldn't have to print out anything.

In the classroom, I'm really more of a Luddite. Since no one "owns" a classroom, I'm wary of depending on the equipment being set up correctly. I prefer writing stuff out on the board in class because it makes people take more notes. One of the instructors for another section makes slides for everything and puts them up on our web site. This way my students get the best of both worlds: in my class, the act of writing helps them remember things, and then if they want to review the material, they can go to his slides and see the information presented in a different way.


Oral Interview, April 2004