
social media assemblages and “misinformation”

At its core, a new materialist digital rhetoric tells us that media/information is made accessible, strong, and durable through human-nonhuman assemblages in digital media ecologies. This is a basic insight from Latour, DeLanda, and others. To expand briefly on this core:

  • there is a reality in which we participate;
  • part of that participation is our construction of knowledge about that reality;
  • those constructions can be strong or weak and can result in a variety of capacities;
  • through those capacities we shape and are shaped by our reality (i.e., we are “made to act”).

We are all painfully aware (and perhaps numbed and tired) of the spread of “misinformation” through social media. I use scare quotes there because misinformation suggests a certain kind of intent (or lack thereof) on the part of the spreader. It also implies a judgment on the part of the receiver. Misinformation is when the receiver judges the information to be false and the sender either believes it is true or hasn’t given the matter much consideration. We can compare that with disinformation, in which the sender knows the information is false and disseminates it nevertheless. This distinction leaves a vast gray area, however. Regulations prohibit mass media advertisers from lying about their products, but we all know they are capable of being suggestive in misleading ways (e.g., drink this beer and hang out with beautiful, happy people). When it comes to politics, what do we call assertions that Biden is a socialist or will “take away your guns”? Can we say they are simply lies? Mis/disinformation? Are they valid opinions? Or are they just ad hominem attacks or some other logical/rhetorical fallacy?

In short, there is a constellation of rhetorical strategies, including mis/disinformation campaigns, employed for political purposes. In part, users are receptive to misinformation because the media resonates with a larger ideological message that operates in that grayer area. E.g., white supremacists certainly employ mis/disinformation, but it is hard to say that it is their belief in the truth of those claims that drives their racism. Providing clear evidence that their claims are untrue is unlikely to shift their views. A new materialist digital rhetoric suggests that the human-nonhuman assemblages that construct and reinforce these versions of the world lie at the center of the problem. Generally speaking, just as science cannot be conducted without labs and education requires schools, right-wing extremism depends on its own media assemblages.

The New York Times has two recent stories about misinformation superspreaders and Facebook’s mixed feelings about controlling them. As these articles describe, network analysis is one of the first tools for understanding such assemblages. “A look at a four-week period starting in mid-October shows that President Trump and the top 25 superspreaders of voter fraud misinformation accounted for 28.6 percent of the interactions people had with that content… Avaaz compiled a list of 95,546 Facebook posts that included narratives about voter fraud. Those posts were liked, shared or commented on nearly 60 million times by people on Facebook…just 33 of the 95,546 posts were responsible for over 13 million of those interactions. Those 33 posts had created a narrative that would go on to shape what millions of people thought about the legitimacy of the U.S. elections.” As such, it is not difficult to identify the users and messages at the center of mis/disinformation campaigns. The question is: how do we act on that information?
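We don’t know Avaaz’s exact pipeline, but the arithmetic of concentration it reports is easy to reproduce. Here is a minimal sketch in Python, using hypothetical data, of how one might surface “superspreaders” from a set of posts and their interaction counts; the sample records and the cutoff are illustrative assumptions, not Avaaz’s methodology.

    # A minimal sketch of the concentration analysis described above: given
    # posts with interaction counts, find the few posts (and accounts) that
    # account for a disproportionate share of engagement.
    from collections import defaultdict

    # Each record: (account, post_id, interactions); hypothetical values.
    posts = [
        ("account_a", "post_1", 1_200_000),
        ("account_a", "post_2", 950_000),
        ("account_b", "post_3", 800_000),
        ("account_c", "post_4", 1_500),
        ("account_d", "post_5", 300),
    ]

    total = sum(n for _, _, n in posts)

    # Rank posts by interactions and report the cumulative share of the top k.
    ranked = sorted(posts, key=lambda p: p[2], reverse=True)
    k = 3
    top_share = sum(n for _, _, n in ranked[:k]) / total
    print(f"Top {k} posts account for {top_share:.1%} of all interactions")

    # Aggregate by account to surface the "superspreaders."
    by_account = defaultdict(int)
    for account, _, n in posts:
        by_account[account] += n
    for account, n in sorted(by_account.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{account}: {n / total:.1%} of interactions")

A fuller analysis would build the sharing network itself (who amplifies whom) rather than just counting, but even this simple tally shows how a handful of posts can dominate tens of millions of interactions.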

I suppose if you’re Facebook, one thing is fairly obvious: with 2B users, there are going to be many who assert claims you know are factually untrue and who do so to promote some political argument. The first problem is identifying those posts. The second is deciding on a course of action. For example, Facebook “trained a machine-learning algorithm to predict posts that users would consider ‘bad for the world’ and demote them in news feeds. In early tests, the new algorithm successfully reduced the visibility of objectionable content. But it also lowered the number of times users opened Facebook, an internal metric known as “sessions” that executives monitor closely.” However, another tool, “called ‘correct the record,’ would have retroactively notified users that they had shared false news and directed them to an independent fact-check. Facebook employees proposed expanding the product, which is currently used to notify people who have shared Covid-19 misinformation, to apply to other types of misinformation. But that was vetoed by policy executives who feared it would disproportionately show notifications to people who shared false news from right-wing websites, according to two people familiar with the conversations.” So this is the market-driven problem. Facebook, Twitter, and so on have a significant user base who use their product as a means to spread right-wing mis/disinformation, and they don’t want to lose that customer base. And this isn’t just about US politics, as this Avaaz report suggests regarding the spread of public health misinformation on a global scale via Facebook.
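To see the trade-off the Times reports in concrete terms, here is a minimal sketch of demotion-as-reranking. Every name, score, and the demotion weight below is an assumption for illustration; Facebook’s actual ranking system is not public.

    # A sketch of feed demotion as reranking: each post has a predicted
    # engagement score and a classifier-estimated probability that users
    # would rate it "bad for the world"; the final ranking discounts the
    # former by the latter. All values here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        engagement_score: float    # predicted engagement (higher = more prominent)
        bad_for_world_prob: float  # classifier output in [0, 1]

    DEMOTION_WEIGHT = 0.8  # how aggressively to discount flagged content

    def ranking_score(post: Post) -> float:
        # Demote rather than remove: the post stays in the feed but sinks.
        return post.engagement_score * (1 - DEMOTION_WEIGHT * post.bad_for_world_prob)

    feed = [
        Post("p1", engagement_score=0.9, bad_for_world_prob=0.85),
        Post("p2", engagement_score=0.6, bad_for_world_prob=0.05),
        Post("p3", engagement_score=0.7, bad_for_world_prob=0.40),
    ]

    for post in sorted(feed, key=ranking_score, reverse=True):
        print(post.post_id, round(ranking_score(post), 3))

The “sessions” problem falls directly out of a design like this: turning up the demotion weight suppresses more objectionable content, but because that content is often the most engaging, it also suppresses the engagement the score is built around.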

What might we do? The first thing that comes to mind is shifting the market calculation for social media corporations. Maybe that means government regulation, but it might mean user communities. E.g., if we can demand these companies make public their policies, efforts, and results in mitigating mis/disinformation, then perhaps through user action (or the threat thereof) we can convince these corporations that it is in their financial interest to act more responsibly. Perhaps the misinformation community will just take off for other social media pastures, as they have already been doing, but limiting their impact is a significant first step.

And certainly much more effective than trying to convince people that they are holding onto false information when, in the end, they don’t really care if the information is true or not.
