Yes, a government bureaucracy can regulate Facebook without eviscerating the First Amendment.

As the title suggests, I’m going to argue that it is completely possible to regulate Facebook and other social media without reactively censoring objectionable content. I’m writing this up following some Twitter conversations in which it became clear to me that I can’t overcome the common objections in 280-character quips. The idea, in a nutshell, is this: clean up the algorithms themselves so that the kind of content we don’t want doesn’t take over the network.

Facebook would like you to believe that monitoring content is impossible. If I were more cynical, I would believe the ham-fisted efforts we’ve seen so far intentionally miss the mark. We’ve all heard stories of friends getting put in “Facebook jail” over anodyne posts that somehow triggered their moderation mechanisms. I’m not cynical enough to believe they’re doing this deliberately, but I think they’re putting only a tiny fraction of their R&D budget into solving these problems. There’s a basic conflict of interest, and nobody is forcing them to do anything about it. That’s why their content moderation algorithms suck. My thesis here is that this can be changed.

Now, to be very clear, I’m not talking about a panel of all-powerful, presidentially appointed moderators who zap objectionable content as it comes out, scarring the Facebook landscape with lightning bolts while purging the world of all hateful thoughtcrime. That is absolutely not what I’m talking about. Not only would it probably fail, but it would also be abused, especially if the moderators were appointed in bad faith.

I’m going to outline something crude and slightly wonky. I’m not a government bureaucrat, so I’m not going to do this very well. My goal is only to argue that it can be done.

What the Algorithms Do

To summarize: I believe Facebook scores each post according to some metric and uses this score to decide which ones to put in your face.

So the first step is to ask them to show us what goes into this you-score. Not the entire algorithm, just the objective they’re trying to optimize. This is not as complicated as they would like you to believe. We wouldn’t dare ask them to reveal their entire AI stack and the IP therein: the architecture, the number of layers, the parameters, the hyperparameters, the state-of-the-art advances in NLP. We only need to know what is being predicted. Whatever is being predicted has to be something specific, because this is how machine learning works: you create a model and evaluate it based on how well it predicts something you can actually measure. Then you tweak the model so that it predicts this objective better. The power of modern-day machine learning is that you can tweak the model incrementally trillions of times so that it works really damn well.

To reiterate this point: in order to train any machine learning model, you have to have the model predict something specific, so that you can evaluate whether the model is garbage or not. That prediction target has to be something real and observable; otherwise you can’t train the model. If you know a few things about machine learning, you know there are probably huge sets of intermediate variables that don’t necessarily have real-world meaning (they might describe latent spaces or embeddings). We’re not asking for those variables, which can remain in-house proprietary IP; we’re asking what the output is supposed to predict.
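To make that concrete, here’s a toy sketch of the point. This is my illustration, not Facebook’s actual system: the features are random, and the label (“did the user engage with the post?”) is an invented stand-in for whatever observable target they really train against. The label is the part a regulator would ask to see.

```python
# Toy illustration: any trainable ranking model needs a concrete,
# observable label. The label here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical post features (poster affinity, recency, media type, ...).
X = rng.normal(size=(10_000, 8))

# The measurable objective: 1 if the user clicked/commented/shared, else 0.
# This is what a regulator would require the platform to disclose.
y = (X @ rng.normal(size=8) + rng.normal(size=10_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The "you-score": predicted probability of engagement, used to rank posts.
you_score = model.predict_proba(X[:5])[:, 1]
print(you_score)
```

Everything upstream of that label, however many layers deep, can stay a black box; the label itself cannot, because without a concrete label there is nothing to train on.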

Step 1 in regulating Facebook: they can keep 99.9% of their AI in a black box, but they have to tell us what hard predictions the score is based on. Sure, they will throw a fit and talk about how this cramps their style, but pretty much every industry has annoying regulations that companies just deal with. This won’t end Facebook.

Now, if you’re still with me, surely some of you are waiting for the moment you get to scream, “but the First Amendment!” We’ll get to that, but so far we’re fine, I think. All we’ve done is what Congress can do under the Commerce Clause, by my estimation. (I’m not a legal expert, but I do know a few words.) All I’m suggesting so far is that Facebook has to declare how it uses the content that others have created in order to sell advertisements. To my unassuming legal mind, we’re not talking about regulating speech; we’re talking about regulating commerce, which Congress has every right to do. They might complain about privacy, but corporations have no such right.

The Regulators

To begin, we require that any company whose business model is to deliver content creators to advertisers at a certain scale, and that uses algorithms to tailor the content and advertisements (I won’t be too precise here, but it’s feasible to describe pretty well what a social media corporation does), register as a social media company if it wants to operate in the US. It registers with some agency created by Congress, which we’ll call the Social Media Oversight Bureau, or SMOB. The social media company has to declare what the top decision layer of its algorithm is based on.
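For illustration only, a declared objective might look something like the following sketch. Every field and target name here is a hypothetical placeholder I invented, not anything Facebook has actually disclosed:

```python
# A hypothetical sketch of what a SMOB "objective disclosure" filing
# might contain. All names and targets below are invented.
from dataclasses import dataclass, field

@dataclass
class ObjectiveDisclosure:
    company: str
    # The observable quantities the ranking model is trained to predict.
    prediction_targets: list[str] = field(default_factory=list)

filing = ObjectiveDisclosure(
    company="Facebook",
    prediction_targets=[
        "P(user clicks post within 24h)",
        "P(user comments on post)",
        "P(user clicks a subsequent ad in session)",
    ],
)
print(filing)
```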

Now SMOB makes rules about what this function can and can’t be. I don’t know exactly what form the rules would take, but they could say, for example, that interactions with future advertisements can account for at most 45% of the variation in this score, with appropriate specificity. But then we add a more controversial layer: we require that the deciding algorithm also balance against objectionable qualities of the content. This is where I know I’m going to get the most flak, so I’ll try to explain it carefully.
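As a sketch of how a cap like that 45% rule could even be checked, suppose (my assumption, not a SMOB rule) that the final score decomposes additively into declared components. Then the variance contribution of each component can be attributed by covariance:

```python
# Sketch: checking a hypothetical cap of "ad interactions may account
# for at most 45% of the variation in the score", assuming the final
# score is an additive combination of declared components.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical per-post component scores.
ad_component = rng.normal(size=n)      # predicted ad interaction
social_component = rng.normal(size=n)  # predicted likes/comments/etc.
final_score = 0.6 * ad_component + 1.0 * social_component

def variance_share(component: np.ndarray, total: np.ndarray) -> float:
    # Covariance-based attribution: shares sum to 1 across components.
    return float(np.cov(component, total)[0, 1] / np.var(total, ddof=1))

share = variance_share(0.6 * ad_component, final_score)
print(f"ad share of score variance: {share:.2%}")  # ~26% in this toy case
assert share <= 0.45, "violates the hypothetical SMOB cap"
```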

For any piece of content created on Facebook, one could judge whether it communicates something false, or whether it suggests violence; there are quite a few other categories as well, such as racism and harassment. The science of what is and is not violent is imperfect, but it’s better than most people think. A picture of your dog curled up next to you watching Bridgerton is communicating something neither false nor violent. A request for burger recommendations in Bend, Oregon is neither false nor violent. Most posts are not. A picture of the border wall in Texas with a caption claiming the wall is on the border between Mexico and Guatemala, on the other hand, is objectively false. A post picturing Mike Pence on a gallows with the comment “this is what we do to traitors” is suggesting violence.

So how do we de-emphasize such posts? Working for the SMOB is a large team of content regulators. They take thousands of random posts coming through Facebook and score them on various axes. Say, anodyne posts get 0, slightly iffy ones get 1 or 2, iffier ones get 3 or 4, and the horrible nasty ones get a 10. Or something. They send these scores back to Facebook. Facebook then has to come up with its own algorithms for predicting these scores, based on its own internal AI. My guess is that, if they actually tried to do this, the result would be fantastically accurate.

Note that we are not yet zapping speech. We are asking Facebook to grade a piece of content based on the probability that it contains false or violent information. Then, here’s the BIG REGULATION: we require Facebook to use these violence, falsity, and other scoring factors in computing the you-score for a post. The SMOB can come up with the specifics of a formula, or bounds for one.
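To make the idea concrete, here is one hypothetical form such a formula could take. The penalty weights below are invented placeholders; the real numbers or bounds would come from the SMOB:

```python
# One hypothetical regulated ranking formula: the engagement prediction
# is discounted by the predicted 0-10 falsity and violence scores.
# The lambda weights are invented placeholders, not SMOB policy.
def regulated_you_score(
    engagement: float,   # platform's predicted engagement, 0..1
    falsity: float,      # platform's predicted SMOB falsity score, 0..10
    violence: float,     # platform's predicted SMOB violence score, 0..10
    lam_falsity: float = 0.08,
    lam_violence: float = 0.08,
) -> float:
    penalty = lam_falsity * falsity + lam_violence * violence
    return engagement * max(0.0, 1.0 - penalty)

# An anodyne post keeps its score; a nasty one is heavily de-emphasized.
print(regulated_you_score(0.9, falsity=0, violence=0))   # 0.9
print(regulated_you_score(0.9, falsity=9, violence=8))   # 0.0
```

The design choice worth noting is that nothing gets deleted here: a high falsity or violence score just drags a post down in the ranking, rather than removing it.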
This will also be verifiable and auditable: Facebook should be required to make available the scores for any specific post or set of posts, so that it’s verifiable that their algorithms are predicting these things with reasonable accuracy.
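An audit could then be a simple loop: sample posts at random, have SMOB raters label them, and compare against the platform’s predicted scores. A sketch, with an invented error threshold:

```python
# Sketch of the audit loop: compare the platform's predicted scores
# against fresh human labels from SMOB raters on a random sample.
# The accuracy threshold is an invented placeholder.
import numpy as np

def audit(platform_scores: np.ndarray, smob_labels: np.ndarray,
          max_mean_abs_error: float = 1.5) -> bool:
    """Both arrays hold 0-10 scores for the same sampled posts."""
    mae = float(np.mean(np.abs(platform_scores - smob_labels)))
    print(f"mean absolute error on sample: {mae:.2f}")
    return mae <= max_mean_abs_error

# Toy data standing in for a real audit sample.
rng = np.random.default_rng(2)
labels = rng.integers(0, 11, size=1000).astype(float)
predictions = np.clip(labels + rng.normal(scale=1.0, size=1000), 0, 10)
print("passes audit:", audit(predictions, labels))
```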

But Free Speech

Your Objections

Objection #2. “What if Trump Jr. wins in 2024, takes over the SMOB, and orders all of the truth-determiners to determine that 2+2 = 5 and racism is not racist?” Simple: you set it up so they can’t. Create an independent SMOB run by an independent board. I don’t know, say 13 people on the board. They have lifetime positions, can resign at any time, and can only be replaced by a vote of the remaining board members. And these board members do all the hiring and firing of the people below them in the bureau. I’m not a legal expert here, but I can google: https://www.justia.com/administrative-law/independent-agencies/

“Independent agencies are not subject to direct control by the president or the executive branch, unlike executive agencies. The leaders of independent agencies do not serve as part of the president’s Cabinet.

To create an independent agency, Congress passes a statute granting an agency the authority to regulate and control a specific area or industry. The statute provides clear guidelines for the objectives that the agency must work toward and specifies the extent to which the independent agency may exercise rulemaking authority. The regulations enacted by an independent agency have the full force and power of federal law.”

So, in order for this to continue functioning without Josh Hawley taking over and reprogramming The Truth, you simply have to believe that a group of 13 reasonable board members can find other reasonable people to replace them as they retire or move on. The probability of this going off the rails anytime in the near future seems quite remote. And if Josh Hawley can take over the board and declare that 2+2=5 is no longer false, we’re probably pretty much cooked at that point anyway. The goal is to keep people, the voters, sane enough that they don’t elect the crazy politicians. If we keep electing Trumpy politicians, we will eventually succumb to some attack on the system. A saner populace is the first defense against that.

Objection #2a. “But the institutions failed us. We can’t trust them.” No, the institutions didn’t fail us. They were tested; some of them failed, but enough of them held. Yes, the Senate should’ve convicted Trump the first time, but we can be thankful that Republicans across the country didn’t pervert the results of the election, despite numerous opportunities to do so. The difference seems to be that senators behave like politicians, who tend to be truth-agnostic, while the rest of us, Republicans and Democrats alike, the people who count votes and register voters and so on, tend to be less corruptible. So while some political institutions (like the Senate) failed us at points during Trump’s presidency, that doesn’t mean every institution failed us. Trump tried. He tried to overturn the election and it didn’t work, because truth still holds a lot of value in the US. What we experienced is a reason to build up more institutional protections, not fewer.

Objection #3. “Do not use power to suppress opinions you think pernicious, for if you do the opinions will suppress you.”

Here’s where I’m going to have to disagree with the free-speech absolutists on the notion that people can always figure out the truth eventually. The last decade has demonstrated otherwise. The internet has completely failed to help people determine the truth; instead, it has given people the conviction to believe whatever they want. Facebook has created entirely separate reality universes. So the whole idea that the truth will triumph if we just get out of the way does not hold in the social media regime.

The consequences of living in a post-Truth country are obvious. We get Trump and QAnon and attacks on Congress, and it doesn’t stop in the foreseeable future.

Taking it one step further

Also, blockchain: what could possibly go wrong?
