This is a continuation of this conversation about laptops in classrooms. Clay Shirky, Nick Carr, Dave Parry, and Steve Krause all have recent posts on this issue (that list is almost strange enough to be a Latourian litany). As I said last time, this is the eternal September of the laptop policy. And as I mentioned in that last post, there is clearly a real issue with the disconnect between laptops (and mobile phones and other digital media/network devices) and the legacy practices of college curriculum, classrooms, and pedagogy: these two sets of things don’t seem to mix well. The primary complaint is that the devices distract students. Dave talks about how his students perform better in class discussions with their laptops closed. And Nick Carr makes a good observation:
Computers and software can be designed in many different ways, and the design decisions will always reflect the interests of the designers (or their employers). Beyond the laptops-or-no-laptops-debate lies a broader and more important discussion about how computer technology has come to be designed — and why.
In my view there are a couple of key issues at work here, and they all revolve around the way we understand and value thinking, participation, and attention. If we begin with the premise that thinking is not a purely internal activity, then we realize that the tools and environments in which we find ourselves shape our capacities for thought. Obviously there are internal (as in beneath-the-skin) processes at work as well. I don’t think anyone would really deny this. However, we might commonly assume some intrinsic consciousness that might be enhanced or inhibited by external forces. A different view would assert that we only have consciousness in relation to others. In the first case, one might look at laptops and ask if they make us better or worse thinkers, if they affect our ability to participate or pay attention. And of course they do in many ways: negatively, perhaps, in the classroom, but also expansively (for good or bad) in terms of participating in and paying attention to the web. However, that view would seem to presume that the pre-laptop state is a natural or default state and that whatever technologies we employ should be valued in relation to the capacities and terms of that default state. E.g., do laptops make us better versions of default students in a classroom? We can debate this question, and for faculty sitting right now in such a classroom, it is a question worth asking and answering. But it is not the question that interests me.
Carr proposes a different question in relation to the design of computer technology. To put the question in my terms, he is asking about the assemblage of attention that these devices are designed to produce. One answer is to say that these devices are flexible in that regard: obviously they can be shut off. But the powered-down laptop still participates in an attentional-cognitive network; it just doesn’t do much for us. And when they are opened full throttle? Then not only do we have access to the public Internet and our various personal accounts, we are also subject to a wide range of push notifications. Even without the pusher there, the thought that people are surely emailing, updating, tweeting, etc. might prey on our minds. And you’ve probably seen articles (like this one) about how addictive email can be. Without suggesting too sinister a motive, it’s unsurprising that companies design products that fuel our desires though not necessarily serve our best interests.
Looking into these questions of design and developing ways to intervene in the design process or otherwise build upon it are important directions to pursue. However they are also only part of the puzzle. We must similarly look at the motives behind classroom design, curriculum, pedagogy and other educational-institutional policies and designs.
Setting aside the immediate challenges facing faculty and students this month, we need to think more broadly and experimentally about how to design the assemblages of attention (and cognition) that will drive future learning. We have a fairly good idea of what the past looks like. Those spaces were designed to focus attention on the teacher and encourage individualized student activity (notetaking, silent reading, worksheets, tests, etc.). They were organized as a series of fairly short and discrete linear tasks: listen to a lecture, take notes, complete a worksheet, take a test, move to the next class. The classes themselves were/are designed to be silos, organized by discipline. This is very clear from middle school through college. This requires relatively short-term, single-point attention (e.g. listening to a lecture for 20-30 minutes). For the most part, homework is similarly designed. It’s true that students can do homework and studying in groups, but since everyone has their own book and notes and everyone is ultimately responsible as an individual for demonstrating knowledge, the assemblage encourages individual activity. Over time, we build toward extending the period of single-point attention so that the graduate student or faculty member might spend hours focused on reading a single book or writing an article. (So that’s the story, except that I think the notion of an extended period of single-point attention is a fiction and, I would suggest, would be unproductive even if it were true. But that’s for another time.) In any case, we define knowing and expertise as the cognitive effects of these activities: to know is to have engaged effectively in these activities; to be an expert is to have done so repeatedly in a single area of knowledge.
We created reasonably efficient feedback loops between educational practices and workplace practices so that workplaces were organized around employees with disciplinary expertise and expanded capacities for single-point attention. And I don’t mean that only in terms of managerial/bureaucratic structures but also the physical spaces of offices and factories, the design of the work day, and so on.
I imagine the future will have a similar feedback loop between education and workplace. (BTW, I find it strange when colleagues complain about universities serving corporate interests, as if we haven’t been doing that for at least a century, as if our current curriculum and practices weren’t constructed in this way, as if that relationship wasn’t integral to what we do.) I don’t think workplaces have any better idea of what this future should look like, though I do think they are more volatile than universities and thus quicker to change, for good or ill. However, we should think about what knowing and expertise look like in a digital networked environment, what work looks like (academic or professional), and then what assemblages of attention we want to build to support those activities and outcomes.
Here’s a brief speculative comparison. I’ve never taught a large lecture course but let’s say I had an introductory class of 200 undergraduates in my field (rhetoric). Conventionally we would have an anthology of rhetorical texts (like this one) or some other kind of textbook (like this one, I guess). I would lecture, respond to student questions, and try to create some other opportunities for student interaction (like clickers maybe). Then we could have a Blackboard site for quizzes and discussion boards (and I could post my PowerPoint slides!). Then a mid-term and final. Maybe some short writing assignments. Maybe more if I had a TA to help me read student writing. In that classroom, laptops probably would be a distraction.
Now let’s imagine a different structure. Still 200 students, but let’s not call it a class. We aren’t going to measure students by having them demonstrate disciplinary knowledge on a test or in an essay. Instead, they are going to engage in rhetorical activities, using rhetorical concepts, methods, and practices to do something: persuade some group to take an action, inform a particular audience about a topic, do research into the rhetorical dimensions of some matter of concern. They will need to work collaboratively. They will need to integrate learning from other parts of their curriculum as well as other experiences. They will need to draw upon my expertise and work with me to define the parameters of their activities. This requires a different kind of assemblage of attention. We probably don’t need the lecture hall with all the seats facing the podium. I could still give lectures, but they would be far less useful as they would no longer tie so neatly into the working definition of what it means “to know.” On the other hand, it would become more important to figure out how to make productive use of those contemporary devices of distraction. Of course they could still distract, still have a negative impact. We would still need to learn how to use them, but now we would have built a structure that supported their use rather than continuing to use a structure designed to support legacy media technologies.
What kind of workforce are we imagining here? One that can work independently in small groups without panoptic supervision. One that works across disciplines and cultures in collaboration to integrate knowledge and experience from different perspectives. One that can use emerging technologies productively to find, evaluate, and manage information as well as communicate and produce knowledge. Something like that. Will every student become that? Of course not. Not every 20th-century student became the ideal “organization man” either. Nor do they need to, nor should they. But inasmuch as our legacy curriculum and assemblage structures pointed toward that organization man, we need to think about building new structures that point elsewhere.