This is basically what you think it is. No intentional AI use. (I realize we’re soaking in it, but this is about experience.)

This is something I am doing for myself. As part of my research approach, I engage with the media I am studying. In the 90s I learned HTML, then later other tools. Then it was blogging and social media. But I’ve largely ended my engagement with social media, because I don’t really study that currently. I’ve been studying AI. So I’ve been interacting with ChatGPT for a little over a year, certainly over 1000 hours of interaction. Some other time I can talk more about why, but for now the point is that this is where I am in relation to these tools.

In short, I’ve been using AI for my work fairly heavily for a year. Now I am stopping for one month to see how that feels.

One of my interests here is in relation to the concerns Stiegler raises about the dangers of exosomatic dependency. In this context, if I find myself incapable of doing my work without AI, then have I increased my capacities as a scholar or reduced them?

Many questions might follow from that interrogation. The answer will surely be that I will need to reinvent/recall other habits of thought. I don’t want to overplay the stakes though. I don’t expect it to be difficult. To the contrary I hope it will be interesting and instructive, at least for me.

Why am I doing this now? Because I am not going to step into the world of agential AI without taking a pause. Also, I think we are in a historical moment where we have to figure out what “AI ethics” really means, if anything. The acceleration of these matters gives me pause as well.

Simultaneously, we also find ourselves poised to offer students degrees and even general literacy instruction in which the use of artificial intelligence has been interwoven. I do not imagine any of us want to educate students to be dependent upon AI in the way we educate them to be dependent upon writing. Not only should we not value AI output as a proxy for writing (even though it easily fools us), we must be very careful with how we describe the operation of AI within any distributed cognitive process, including writing.

Currently I am thinking of AI output as non-ethical generative resolution. So when you ask a machine to perform that operation as part of your cognitive process, what results? More to the point, what happens when your capacity to do cognitive work of economic value requires ongoing participation with non-ethical generative resolution?

Well, having spent a year doing exactly that on a voluntary, experimental basis, I think I have some basis for discussing this on an experiential level. I am sure that I am far from the only one in this situation. With a little distance, I plan to share some insights.
