Issues of assessment have been coming at me from a couple of directions recently, and I've appreciated Jeff Rice's taking up of Latour's matters of concern in relation to assessment. Rice has promised a discussion of his concept of "networked assessment," which I look forward to.
In my experience, assessment runs from hell-bound good intentions to institutional pragmatism to bureaucratic cynicism. À la Latour, Rice notes that proponents of assessment would seem to view assessment procedures as revealing facts: namely, in the case of composition, whether the curriculum makes students better writers. The litany of absurd propositions one must take on to ask and answer this question is extensive. Why should comp make students "better writers" in the first place? Does Bio 101 make students better biologists? Does History 101 make students better historians? But I digress. The result of the absurdity of assessment is that, as a WPA, one could be in the position of thinking, "I don't for one second believe that this 'assessment' tells me anything useful, but I will conduct it for pragmatic reasons because the university demands it of me." Certainly that situation opens the door for cynicism.
I would like to avoid that in my job now as director of composition. I think the answer begins with setting aside the value-laden purposes behind assessment (e.g., that we do it in order to do our job better). In ANT fashion, the activity of assessment is descriptive: it allows the actor-objects to speak for themselves and follows the network of associations rather than leaping from the student paper to "the corporatizing university" or "ethnocentric academic discourse" or whatever.
So let's take the fairly typical example of portfolio assessment, where one gathers a random sampling of student writing from composition courses to assess. And let's say, like the WPA Council, that one of our assessable goals has to do with the writing process. Not only do we want students to know what the writing process is, and not only do we want them to use the process approach (especially revising) when writing their papers, but we want them to produce "better" papers as a result (thus proving they have become better writers).
Honestly, none of that makes any sense to me whatsoever. I can collect a bunch of texts. I can hire some readers, norm them, and set them to the task of quantifying those texts. But the notion that this tells me something about the student-author, or more distantly, that author's experience in the course, the course the student took, or (at the periphery) the program itself just strikes me as bizarre. If the people who wear Nike shoes are overweight, does that mean there's something wrong in the shoe factory?
An ANT-like assessment, where one traces the associations among curriculum, instructors, class meetings, students, composing practices, and texts, makes more sense. It would be labor-intensive, and in the end one wouldn't get a number that passes judgment on the program. What one would encounter are the matters of concern that shape how knowledge about our programs is constructed. One might see the stories of instructors and the decisions they make in balancing their many sections, or of TAs balancing teaching and coursework; one would listen to the overcrowded classrooms and the office space; the university CMS would have a word. One would have to abandon the notion that there is some perfectible curriculum or even university. One gives up, in Latourian fashion, these modernist utopian impulses.
In short, one might have to abandon the value-laden ambitions of assessment. Of course this doesn't mean that we give up trying to be better. It just means that being better is a complex ethical practice that we must continually compose rather than reveal.