The removal of Trump from various social media platforms has been big news, as has the de-platforming of Parler by Apple, Google, and Amazon. There’s a lot of conversation about this in relation to the First Amendment. I’m not a constitutional law expert, so I’m not going to focus here on the strictly legal aspect. I’m going to stay in my lane and talk about media theory.
Of course everyone knows that speech is different from writing, and both of those are different from the press, which circulates writing. Even the framers knew that, as they identified both a freedom of speech and a freedom of the press. And yet we also retain a phonologocentrism in which speech is primary, so we have come to think of mechanical and digital media as kinds of speech or kinds of presses, and the activity of the press as an extension of the activity of speech, which is communication. My basic argument here is that, however we choose to proceed, this understanding of media is dangerously flawed. I don’t think that’s remotely controversial; basically anyone who studies media technologies would tell you that. Put differently, we need to understand the different operations of media if we want to understand their social, political, and cultural effects and deliberate upon the role we want them to serve in our society.
The other aspect of this, as we all also know, is the massive amount of money-power represented by big technology companies and the individuals who own or control them. Right now we are talking about propaganda, hate speech, and conspiracy to commit insurrection/sedition/terrorism/treason. We also have continuing concerns about data privacy and monopolies. Without really getting into the legal bits, the main viewpoint here is that the First Amendment doesn’t prevent private companies from restricting speech. We have also decided that those companies bear no legal responsibility for what is shared on their platforms.
But back to the media technology part of this. We all know there’s a lot more to social media than mere talking or typing. There is hardware and software comprising your smartphone or computer. There are data connections, networks, and servers. There’s the social media platform itself with its software, algorithms, design, data storage, etc. Plus, of course, there are all the other users on the platform with whom we interact. Certainly none of that stuff can be imagined as speech, right? As I think I mentioned in my last post, do I have the right to speak through a stack of amplifiers in my front yard? And if I do, do I also have a right to the electricity to power them? I know I don’t have a right to access Facebook because I don’t have a right to a data connection or a smartphone. By loose analogy, I have the right to write things, but I don’t have the right to pen and paper.
So what is our current situation? Social media platforms have been used to create and circulate white supremacist propaganda, to attack and vilify those who oppose white supremacy, to plan terrorist attacks, to record and publish real-time accounts of acts of insurrection and terror, and even to coordinate such acts. All of these, except perhaps the last, might be conventionally protected as free speech. There are various solutions that rely upon our ability to identify and restrict media, solutions that require identifying and making judgments about content, but I’m going to set those aside here. Instead I’ll offer a few examples of structural matters.
- Reduce the number of friends/followers/etc. or members of any single group to 5000. No more Trumps tweeting out to millions of people. To reach millions you’d need a series of people.
- No aggregation or timelines. If you want to see what your friend is doing, visit her page, then visit your next friend’s page, and so on.
- No liking or one-button sharing of content. If you want to say something similar then say something similar. If you want to leave a comment, then leave a comment.
- And regarding comments: no automatic approval of comments. Each comment must be hand-approved by the account owner.
- No advertisements and no data collection by platforms. Users pay a subscription fee.
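To make these structural rules concrete, here is a minimal sketch of what a platform enforcing them might look like. Everything in it — the names, the `Account` class, the exact cap — is a hypothetical illustration of the bullets above, not any real platform’s API:

```python
# Hypothetical sketch of the structural rules above as platform-side checks.
# All names and limits here are illustrative assumptions, not a real API.

MAX_AUDIENCE = 5000  # cap on friends/followers/members of any single group


class Account:
    def __init__(self, name):
        self.name = name
        self.followers = set()
        self.pending_comments = []   # comments wait here for hand-approval
        self.approved_comments = []


def follow(follower, target):
    """Enforce the audience cap: no single account broadcasts to millions."""
    if len(target.followers) >= MAX_AUDIENCE:
        raise ValueError("audience cap reached; reach must spread person to person")
    target.followers.add(follower.name)


def submit_comment(target, author, text):
    """Comments only queue up; nothing appears until the owner approves it."""
    target.pending_comments.append((author, text))


def approve_next(target):
    """The account owner hand-approves one comment at a time."""
    if target.pending_comments:
        target.approved_comments.append(target.pending_comments.pop(0))

# Note what is deliberately absent: there is no share() or like() function,
# and no aggregated timeline -- reading means visiting each account in turn.
```

The point of the sketch is that each rule lives in the platform’s structure rather than in judgments about content: the cap, the approval queue, and the missing one-button functions all slow circulation regardless of what is being said.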
To quote a line from the Monty Python skit about the Whizzo Chocolate company, “our sales would plummet!” Yes, well, maybe corporations whose profit-making function is data collection and surveillance, and who are willing to burn the country down by providing platforms for lies and terrorists, shouldn’t be the richest in the world. From my perspective these strategies all share the same aim, which is to reduce the viral potential of media, to slow down their rhetorical velocity. Yes, from a certain perspective it would make social media less usable, if by usable you mean addictive. It would be more usable if by usable you meant enabling you to connect with your friends without destroying the country. Or think of it this way: maybe you find cooking dinner with a flamethrower to be fun, fast, and easy, but I think you’ll find using a stovetop to be fairly simple too. Plus you’ll stop burning down your neighborhood.
One reply on “speech, freedom, and the crunchy frogs of social media platforms”
Without getting into finer points about the lines between speech and writing, free or otherwise, I think I’d add one more item to your list, maybe as an alternative to #5: social media companies need to be held liable for what users post on their sites much in the same way that conventional print publications are liable. I’m thinking specifically here of the provision of the Communications Decency Act that protects these companies; I believe it’s Section 230.
In the “old tymey” days of people talking with each other on phones, the argument was pretty simple: the phone company had/has no way of really knowing what you’re saying while on the phone– unless there’s something like a court authorized wiretap or a not so legal eavesdropping operation by the phone company. The same kind of logic applied when people started blogs and stuff to allow comments (like this one!). When I ran a blog about EMU gossip and news, I had a place in “the rules” for posting on the site that referred to this because I didn’t want to be even remotely liable for something stupid someone posted. Anyway, this all made sense because of course how could the phone company know? How could I stop people from posting a comment that was awful?
But knowing EXACTLY what its users write/say — and who they are, where they live, what kind of politics they have, what sort of toothpaste they like, what they were shopping for online, etc., etc. — is how these social media companies make money. They have all sorts of behind-the-scenes ways of preventing people from posting stuff like porn right now, and they also have lots of ways of suspending users and such too. So why shouldn’t they be liable for allowing out-and-out hate speech and calls to violence and stuff like that?
And of course, that’s why these social media dudes are now bending over backwards to kick out Trump and these Parler QAnon people: they don’t want their Section 230 protections taken away.