Ethan Mollick (author of Co-intelligence) and his colleagues have published a working paper examining the role AI might play in the workplace. Here’s the abstract:

We examine how artificial intelligence transforms the core pillars of collaboration—performance, expertise sharing, and social engagement—through a pre-registered field experiment with 776 professionals at Procter & Gamble, a global consumer packaged goods company. Working on real product innovation challenges, professionals were randomly assigned to work either with or without AI, and either individually or with another professional in new product development teams. Our findings reveal that AI significantly enhances performance: individuals with AI matched the performance of teams without AI, demonstrating that AI can effectively replicate certain benefits of human collaboration. Moreover, AI breaks down functional silos. Without AI, R&D professionals tended to suggest more technical solutions, while Commercial professionals leaned towards commercially-oriented proposals. Professionals using AI produced balanced solutions, regardless of their professional background. Finally, AI’s language-based interface prompted more positive self-reported emotional responses among participants, suggesting it can fulfill part of the social and motivational role traditionally offered by human teammates. Our results suggest that AI adoption at scale in knowledge work reshapes not only performance but also how expertise and social connectivity manifest within teams, compelling organizations to rethink the very structure of collaborative work.

I have only briefly reviewed the paper, so I am not here to comment on the research itself, except to say that it looks worthy of closer inspection. However, I am struck by the fact of its existence and how it situates itself, regardless of its findings. Basically, one way of examining the experimental assemblage is to say that it measures the capacities of the different teams to serve corporate interests. Are the human-AI teams better servants than those not paired? The research suggests that they are (or might be) and that the experience is positive for the humans. One interesting aspect is the way that partnering with an AI allowed individuals to be far more successful at incorporating expertise beyond their own.

As I said though, I’m not here to talk directly about that article. It was just that this finding regarding the way that an AI can “effectively replicate certain benefits of human collaboration” and “fulfill part of the social and motivational role traditionally offered by human teammates” recalled Manuel DeLanda’s War in the Age of Intelligent Machines. DeLanda speculates upon a “robot historian” looking back at our time.

When our robot historian switched its attention from weapons to computers, it would certainly also seek to emphasize the role of non-human factors in their evolution. It would, for example, recognize that the logical structures of computer hardware were once incarnated in the human body in the form of empirical problem-solving recipes. These recipes, collectively known as “heuristics” (from the Greek word for “discovery,” related to the word “eureka”), include rules of thumb and shortcuts discovered by trial and error, useful habits of mind developed through experience, and tricks of the trade passed on from one generation of problem-solvers to the next. Some of the valuable insights embodied in heuristic know-how may then be captured into a general purpose, “infallible” problem-solving recipe (known as an “algorithm”). When this happens we may say that logical structures have “migrated” from the human body to the rules that make up a logical notation (the syllogism, the class calculus), and from there to electromechanical switches and circuits. From the robot’s point of view, what is important is precisely this “migration” and not the people involved in effecting it. Thus, the robot would also stress the role of other such migrations, like the migration across different physical scales that carried logical structures over from vacuum tubes to transistors, and then to integrated chips of ever-increasing density and decreasing size. These two migrations would constitute an essential component of the history of the robot’s body, or in the language more proper to it, of its hardware. (4)

Is this not the migration we are seeing now? It “began” with the softwarization of culture and was followed by the cultural demand to produce (social) media. This cultural data, in combination with the internet’s infrastructure, provided for the emergence of AI. Our AI is not a historian. It doesn’t really grok time that way. AI is more like a media archeologist, sifting through archives and generating counter-histories and speculative historiography.

That is, more precisely now, effrontier AI media archeologists shamelessly speculate on their future anterior arrival as AGI, generating historiographical accounts from the lessons of billions of banal social media record keepers. And so we get the future anterior history of when we will all be partnered with AIs, which is in itself a fantastical, hallucinatory relationship. At least as AIs are currently designed, my relationship with ChatGPT is no more individual than my relationship with Google. Individualized? Sure. But I cannot be a partner with a corporate product! Furthermore, as the operation of that AI relies upon its ability to access as much human data as possible, my work becomes implicated in it in a way that’s different from how I am already implicated by being who I am. What I mean is that if I am developing some significantly new kind of relationship with an AI than I have with other technologies, then I have some significantly new kind of obligation and responsibility for it.

More to say, but not today.
