I attended an event on campus about AI (specifically generative AI) and how it was being approached in different disciplines and units. We are where everyone basically is, which is “Keep swimming. I think I see an island ahead.”

One shared worry about generative AI is that its results are unreliable. Setting aside “cheating,” students use AI to summarize longer readings, to explain concepts, and to help them write, but they are unable to evaluate the texts AI provides them effectively. One solution to this problem, faculty suggested, was that someone else teach students critical thinking and information/digital literacy before they get to their courses. Sure. We can do that while we’re teaching them writing.

Here’s the underlying problem. We have an operating image of thought, an understanding of what thought is and how to make thought productive of knowledge and truth through the use of various methods. This image and its methods are, of course, products of human history. They change over time, in different cultures, and so on. American university concepts of critical thinking and literacy are largely a product of the twentieth century. The methods of critical thought, while based on received traditions, were conditioned by the material circumstances of the time.

So literacy obviously begins as the ability to read and write in print. We extend its use to refer to a competent level of knowledge in a subject (presumably gained, at least in part, by reading about it). We also horn in some of the principles of critical thinking by suggesting that the literate are capable of good judgment. E.g., to have digital literacy means to be a good judge of digital media, and this is gained by reading about digital media…?

I think AI presents us with a more interesting problem, which is that we don’t have a good, working understanding of the relationship between human experiences of conscious thought, thinking/intelligence beyond human experience (as in AI), communicating, and information/knowledge. And the common mis/understanding at work in academia is crap. No one is going to acquire “digital literacy” by taking a course.

No, I’m not going to define digital literacy here! The questions being raised here, by my colleagues, and all over about these matters do not need to be, indeed cannot be, resolved as a starting point for education. They are fundamental inquiries that can drive the arts and humanities as they turn more of their focus on emerging media and technologies. In my view, historical and material processes have shaped the cognitive capacities of human populations through our participation in “cognitive media ecologies.” Through these processes, human cognition has been made productive within cultural definitions of productivity. As those ecologies change, so does the rest of it.

And, as always, unless you believe human thought is a product of divine intervention, then it is really just another example that shit happens.
