“Teaching for Transfer” is an operative concept in rhetoric and composition studies that addresses the concern that the learning experiences of first-year composition courses should transfer to future academic and/or professional writing contexts. It is not a concept without critics, but I would describe it as deeply embedded in disciplinary best practices.

Following the basic dictate that no matter where we go, there we are, we can assert that we are always already “transferring,” but we don’t usually call it that. Transfer is more like “the things we carry.” Teaching for transfer describes pedagogical methods to encourage students to take stuff with them from one course to another. This transferring process, once accomplished, is a transference. We are most familiar with that word in psychiatric terms, as when the patient projects/transfers emotions onto the therapist. But the word existed before then simply as the action or process of transferring.

From a perspective such as Bernard Stiegler’s, the psychic-pharmacological effects of technics are interwoven with other transfers as exosomatization. The storage of memory becomes a cognitive reliance. It can become archive fever. It can spark anxieties over the loss of the archive. Stiegler articulates his concern about our technological and media dependencies in terms of a proletarianization and alienation of thought and life. This is also a question of disindividuation, where we simply lose our capacities to act on our own (which is always a limited, relative thing anyway). But when you become reliant on your smartphone such that you can’t get through your day without it, then you have experienced disindividuation.

In the context of frontier AI chatbots, as is the case with algorithmic social media platforms, the primary aim is user dependency: to keep the user on the site and to entice the user to return. As Mark Marino notes in a post today, “A Mindflayer Disguised as a Magic 8-ball,”

If LLMs were neutral systems on a neutral machine with neutral goals trained on an inclusive dataset uncontaminated by racism, sexism, et cetera … that would be one thing. Then, an LLM would be like a calculator or steam engine or a Magic 8 Ball. But on top of their baked in biases, models are thirsty mindflayers driven by insatiable desires for our data for their silicon valley overlords. They are building your dependence and easing your data out of you one encouraging question at a time.

Marino is writing about LLMs as therapists, and it struck me how unethical it would be for someone’s therapist to attempt to make that person dependent upon them. That’s called transference. But this is the design intention of frontier AI. These systems are designed to create dependence, to create disindividuation and the proletarianization of life and thought, as Stiegler puts it.

Circling back to AI literacy. Certainly no one wants to “teach for transference.” That would be unethical. We wouldn’t want to teach students to be dependent on a commercial technology in order to communicate, especially one as inscrutable and oligopolistic as frontier AI. However, it is a risk. Indeed, students and other humans will become dependent upon AI regardless, just as we have become dependent on the web, smartphones, and social media. Some dependencies are stronger than others. Some are more dangerous than others.

No one knows what results from disindividuation by artificial intelligence. Its effects will be unique, as each new technology’s are. However, they are to be avoided.
