Ok, we’ll see if these two topics are actually just one. I’m going to start with “assessment numbers as quasi-objects.”
So quasi-objects is a concept I/we get from Serres and Latour. Without going too far down that path, it’s an understanding that actors do not reside simply on one side or the other of the object/subject or nature/culture divide, as our Modern ontology would have us accept. Numbers are an important example of this. Following Latour’s modes of existence, would we say they are beings of reference, of fiction, or of technology? I can’t really get into the whole numbers thing here. I just want to write about assessment numbers (which is where there’s hope I’ll circle round to the WAC topic).
These days there are all kinds of available numbers about students, a whole demographic-algorithmic layer of student identities. I imagine there are similar numerical identities for faculty, though I don’t care to think much about that. Then there are the numbers one might generate through familiar assessment processes: student and faculty surveys, scoring of student work, etc. As WAC director I’m wrapping up an assessment of our WAC curriculum, and we have managed to connect our rubric-based assessment of students on our learning outcomes with the data we’ve collected about those students–their demographic data as well as their academic performance. Without going into the details, basically the courses are going well, and the main correlation I can see is that students who do well (in terms of GPA) are generally meeting/exceeding our expectations on the learning outcomes. In short, students who do well do well. I mean, we did find out some useful things as well, but I’m not here to discuss that.
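To give a sense of what that correlation-seeking looks like in practice, here is a minimal sketch. All names and numbers are invented for illustration; the real assessment data lives in the program’s spreadsheets, and the rubric scale (1–4) and GPA cutoff are assumptions, not our actual instrument.

```python
# Hypothetical sketch: does GPA track with rubric-based outcome scores?
# Every data point below is made up for illustration.
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson's r for two equal-length lists of numbers."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) *
               sum((y - my) ** 2 for y in ys))
    return num / den

# (GPA, mean rubric score on the learning outcomes, scored 1-4)
students = [
    (3.8, 3.5), (3.5, 3.2), (3.1, 3.0), (2.9, 2.6),
    (2.5, 2.4), (2.2, 2.1), (3.9, 3.7), (2.7, 2.8),
]

gpas = [g for g, _ in students]
scores = [s for _, s in students]

r = pearson(gpas, scores)
meets = [s for g, s in students if g >= 3.0]      # "doing well" by GPA
struggles = [s for g, s in students if g < 3.0]

print(f"Pearson r between GPA and rubric score: {r:.2f}")
print(f"Mean rubric score, GPA >= 3.0: {mean(meets):.2f}")
print(f"Mean rubric score, GPA < 3.0:  {mean(struggles):.2f}")
```

On this invented data the correlation is strong and the higher-GPA group outscores the lower-GPA group on the rubric, which is the shape of the "students who do well do well" finding described above.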
Instead, in terms of quasi-objects, I’m thinking about the operation of these numbers as simultaneously political and scientific acts of representation. That is, they are an empirical representation of the WAC program, but they also speak for the students and faculty, representing us in political deliberations about university policies, investments, and so on. The other day some of my colleagues were, half-jokingly, calling assessment a kind of epideictic rhetoric. And I suppose it is (or could be)–a kind of ceremonial affair where someone might be praised or blamed. However, I am more interested in assessment as a kind of deliberation on what should be done.
In his contribution to Thinking with Bruno Latour in Rhetoric and Composition, Joshua Prenosil writes, “Latour is a rhetorician of inartistic proofs because he studies the available means of establishing coagency, power that is always achieved through imminent material-semiotic actors in a network.” Arguments based on assessment likely rely on inartistic proofs and upon the strength of the alliances among actors and quasi-objects upon which the assessment is composed. Numbers are quasi-objects but so are the procedures, the forms, the rubric, etc. We did indirect assessments as well–surveys and focus groups.
Among these quasi-objects are the elements of the WAC media infrastructure. It’s as difficult to imagine doing assessment without spreadsheets as it is to think of writing a book without a word processor. Of course it can be done, but I’d have to go about it in a very different way. I’m not sure how I would have coordinated the efforts of nearly 100 faculty in nearly 20 departments. I certainly couldn’t have sought out correlations on so many different dimensions. Media infrastructures are integral quasi-objects in strengthening the coagency that solidifies inartistic proofs.
Learning that students who do well do well isn’t especially enlightening, but it does help to create some face validity for the work we’ve done. That is, if students were otherwise doing well, getting good grades in their writing courses and in general, but performed poorly on the assessment, then that would mean something was broken, either with the assessment or with the curriculum. That data point, on its own, doesn’t tell us much about the students who are struggling or why they’re struggling, let alone what we might try to do about it. But it probably does lend some credence to the claim that the students we are representing as struggling are in fact struggling.
As universities, for good or bad, become more data-driven, data becomes a cornerstone of institution-wide efforts, like improving retention or time to degree. There’s already good evidence that writing courses are integral to that. Students who struggle in FYC are likely not to return to college the following year. Maybe it’s because they have challenges with writing, or maybe with writing in the English language. It might be because they’re having more general problems with figuring out how to be successful students. Media infrastructural quasi-objects facilitate the connections between WAC assessments and these institutional priorities.
All of this allows us to compose and use inartistic proofs to make arguments in favor of improving practices. Of course it still can be hard to persuade someone to make a significant investment in improvement. That’s a longer process.