Social Media -- Threat or Menace?
Are Facebook and Twitter a threat to democracy? A menace? What could or should be done about it?
This morning, MSNBC's "Morning Joe" was on a campaign against Twitter, and especially Facebook, for their role in amplifying the radicalism that led to the insurrection at the Capitol, culminating in Joe Scarborough's rant that "Facebook should be broken up into a million little pieces" (as best I recall). NPR ran a series of pieces highlighting social media's critical role in fostering radicalism, though without suggesting remedies.
I think both are a kind of "shoot the messenger" response to what has clearly been a credible threat to American democracy. The real threat is not the platforms but the users, who will find other means to organize and collude if these venues are denied them. Nevertheless, there are some adjustments the platforms could implement that might mitigate the risk.
First off, are Facebook and Twitter in fact responsible for enabling the haters to organize, incite violence and foment revolution? These are not Morning Joe's exact words, but the claim was that Facebook is responsible for <bad things> because they knew about it, didn't do anything about it until it was too late, and were in fact trying to profit from it.
Yes, the social media platforms are opaque about how they work, and that fosters dark suspicions. But a big part of this is that they're working with algorithms that are intended to be free from developer bias about what topics and policies should drive the recommendations. It's like Amazon's "people who bought this also looked at that..."; at the end of the day, they cannot tell you "why" those other people looked at that; all they can say for sure is that the statistics are what they are.
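For illustration, recommendation of the "also looked at" kind can be sketched as raw co-occurrence counting, with no editorial judgment about the items themselves. The sessions and items below are invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical browsing sessions: each set is the items one visitor viewed.
sessions = [
    {"router", "cable", "switch"},
    {"router", "cable"},
    {"router", "switch"},
]

# Count how often each ordered pair of items appears in the same session.
co_occurrence = Counter()
for items in sessions:
    for a, b in combinations(sorted(items), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def also_looked_at(item, k=2):
    """Top-k items most often viewed alongside `item` -- pure statistics,
    with no judgment about what the items mean."""
    scores = Counter({b: n for (a, b), n in co_occurrence.items() if a == item})
    return [b for b, _ in scores.most_common(k)]
```

The function can report *that* two items co-occur, but never *why* people viewed them together -- exactly the opacity described above.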
And Facebook, especially, has been tone-deaf in the development of their platform and in defending it later. Second thing first: they didn't buy enough Senators to forestall questioning, and then got all self-righteous about justifying their business decisions and success. [[expand, with illustrative incidents]] And their platform is definitely more forceful and guided in the user experience than other SM platforms, constantly nagging you to allow more access to your data, offering friends of friends, flooding you with notifications and distractions. Twitter does that better, I think.
But these platforms are more or less subject-neutral: they are trying to help you find contacts and content about anything they think you're interested in. Contrast that with Parler and Gab, which are explicitly focused on particular subjects. So Facebook and Twitter amplify the bad along with the good.
I'd say the real problem is the users these platforms were built to serve. The platform reinforces people's natural tendency to be social within the tribe and suspicious of strangers. It doesn't help them resist ambitious, charismatic shamans, priests and kings who would like to leverage community support into an easier life for themselves. (As I write this, I realize this sounds a lot like professional politicians. Maybe the real source of friction between Facebook and Congress is that the latter can't stand the competition?)
This situation is similar to how we despise Big Pharma for developing and promoting opioids and blame them for ruining addicts' lives. We wish people could be less prone to chemical dependency, yet we need effective painkillers. So we wish Big Pharma could be a more responsible and maybe compassionate corporate citizen and protect us better from ourselves. Chemical dependency is arguably a more compelling weakness than sociological engagement. But social media can probably develop some effective controls without compromising their libertarian ideals or their profits.
I argue that sunshine is a great disinfectant. But every step we take toward greater transparency is also a step toward greater danger for the political dissident living in an oppressive police state as well as the mendacious rabble-rouser who is trying to avoid legitimate, just prosecution.
First, make it easier to distinguish real people from bots and troll farmers. User profiles should be more informative, including:
- reliable geo location -- so you know where the user is connecting from. This limits a Russian troll farm's ability to hide behind an all-American-sounding username. If a user uses a proxy to hide their location (as most ordinary Chinese must), what then?
- flag suspected bots -- how to identify?
- publish the user's advertising profile. Facebook for sure, and Twitter likely, have very detailed information about each user's demographics, connection network and demonstrated interests, which they share (in varying degrees of detail) with advertisers. Share it with the user him/herself and with the rest of the community. That's very radical, and will likely scare off many legit users. How to mitigate?
Second, flag each and every post with a factual-content and reliability metric:
- where is this content on the scale from pure opinion to pure (assertion of) fact?
- And how true is the asserted fact?
Since "facts" on the internet are more like "memes", meme identification algorithms can also identify asserted facts and capture the similarities in the way multiple sources assert the "fact". This is handy for bot identification.
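As a rough illustration of that idea, near-identical wording across many accounts can be caught with nothing fancier than word n-gram overlap. This is a minimal sketch with made-up thresholds, not any platform's actual pipeline:

```python
def shingles(text, n=3):
    """Lowercased word n-grams of a post's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Set overlap: 1.0 means identical shingle sets, 0.0 means disjoint."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def suspiciously_similar(posts, threshold=0.8):
    """Return index pairs of posts whose wording is nearly identical --
    a hint of coordinated posting rather than independent users."""
    sets = [shingles(p) for p in posts]
    return [
        (i, j)
        for i in range(len(posts))
        for j in range(i + 1, len(posts))
        if jaccard(sets[i], sets[j]) >= threshold
    ]
```

Two accounts posting the same sentence verbatim would be flagged as a pair; genuinely independent phrasings of the same opinion mostly would not.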
Examples of how this would play:
- "I know the CIA killed Kennedy"
  - Mostly assertion of "fact"
  - "Fact" not supported by evidence (citations of affirmations and refutations)
  - "Fact" is also asserted by [[list of other people]], is agreed by [[n]]% of people [[in various groups]]
- "My dog is the goodest boi"
  - Mostly assertion of opinion
  - Reference to the "goodest boi" meme for dogs. What connections does "goodest boi" lead to?
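Annotations like these could travel with each post as structured metadata. A minimal sketch -- every field name, scale and value here is an illustrative assumption, not a real platform schema:

```python
from dataclasses import dataclass, field

@dataclass
class PostAnnotation:
    """Hypothetical per-post metadata; fields are assumptions for
    illustration, not any platform's actual data model."""
    text: str
    opinion_to_fact: float          # 0.0 = pure opinion, 1.0 = pure assertion of fact
    support: str                    # e.g. "unsupported", "contested", "supported"
    also_asserted_by: list = field(default_factory=list)

kennedy = PostAnnotation(
    text="I know the CIA killed Kennedy",
    opinion_to_fact=0.9,            # phrased as a claim of fact
    support="unsupported",          # no citations of affirmations or refutations
)
dog = PostAnnotation(
    text="My dog is the goodest boi",
    opinion_to_fact=0.1,            # pure opinion / meme reference
    support="unsupported",
)
```

The point is not the particular numbers but that the platform would surface them next to the post, letting readers see at a glance where a claim sits on the opinion-to-fact scale.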