Yes, a government bureaucracy can regulate Facebook without eviscerating the First Amendment.

What the Algorithms Do

Before outlining a solution to the problem, I will try to explain how I see the problem: the instant you open your Facebook app, there is a pool of hundreds of posts that Facebook could decide to put on your feed. Facebook has to decide which ones to put in your face, and which ones not to. So how does it do this? My guess is that it gives each piece of content a “you-score”. Maybe that score is tailored to you specifically, maybe it’s tailored to users with some affinity to you. I don’t know the exact underlying algorithm, as I don’t work there, but somehow some sort of you-score is computed that predicts how well Facebook’s objectives will be satisfied by putting a particular post into your feed. An easy first guess of what the score might measure is your probability of engaging with the post itself. Another possibility is that the score is the expected amount of time you spend on Facebook after seeing the post. For example, seeing a picture of a family doing happy family stuff might cause you to put down your phone and engage with your loved ones, while for someone else that same picture might cause a pang of emptiness and propel them on a scrolling journey, searching for something elusive. A political post that utterly infuriates you might inspire you to scroll longer, looking for blood, hoping to ask one of your politically opposite friends just exactly who the “sheeple” are now.

Again, I don’t know exactly what goes into this decision. I can only follow the incentives, and from what I can tell Facebook has two primary incentives regarding any user’s time on Facebook. First, they want you to stay on Facebook for as many hours of the day as possible. Second, they want as much information about you as possible. The more information they have about you, the better they can keep you engaged and online, and, more importantly, the better they can deliver you (or people who behave like you) perfectly placed advertisements, delivering the best possible value to their real clients.

This brings up another likely possibility for how they you-score the posts: predict the probability that you will click through an advertisement that they’re going to show you in about 90 seconds. These sorts of predictions are completely achievable by modern AI, especially considering the trillions of interactions that Facebook is constantly mining. So they might show you an article about “the Fed printing money” now, follow it up with a post of your brother-in-law on his boat, and then, 45 seconds later, hit you with an advertisement to join Coinbase and achieve financial freedom, knowing the sequence has a high probability of producing a click-through. This might sound diabolical, but this is a giant corporation with a war chest of billions of dollars and zettabytes of data on human behavior, driven only by the motivation to deliver value to its shareholders. Again, I do not work at a social media company and I’m not privy to their exact algorithms, so the above is conjecture.
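
To make that concrete, here is a rough sketch, in Python, of what a “you-score” ranking step could look like. To be clear, everything in it is a guess: the prediction functions, the weights, and the objectives are stand-ins for whatever Facebook actually optimizes, invented here only to illustrate the score-everything-then-sort structure.

```python
# A hypothetical sketch of "you-score" ranking. The prediction functions are
# stand-ins for learned models; the weights and objectives are invented purely
# to illustrate the scoring-and-ranking step, not Facebook's real system.
import random
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author: str
    text: str

def _fake_probability(*keys: str) -> float:
    """Deterministic stand-in for a learned model's output."""
    return random.Random("|".join(keys)).random()

def predict_engagement(user_id: str, post: Post) -> float:
    """Chance the user likes/comments/shares this post (hypothetical model)."""
    return _fake_probability(user_id, post.post_id, "engage")

def predict_session_minutes(user_id: str, post: Post) -> float:
    """Expected extra minutes of scrolling after seeing this post (hypothetical)."""
    return 30 * _fake_probability(user_id, post.post_id, "session")

def predict_ad_clickthrough(user_id: str, post: Post) -> float:
    """Chance the user clicks the ad shown ~90 seconds later (hypothetical)."""
    return _fake_probability(user_id, post.post_id, "ad")

def you_score(user_id: str, post: Post) -> float:
    # Hypothetical blend of the three objectives discussed above.
    return (0.5 * predict_engagement(user_id, post)
            + 0.02 * predict_session_minutes(user_id, post)
            + 0.5 * predict_ad_clickthrough(user_id, post))

def build_feed(user_id: str, candidates: list[Post], n: int = 25) -> list[Post]:
    """Rank the candidate pool by you-score and surface the top n posts."""
    return sorted(candidates, key=lambda p: you_score(user_id, p), reverse=True)[:n]
```

The specifics don’t matter; the structure does: every candidate post gets a number, and the numbers decide what lands in your face.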

The Regulators

We need a government agency to get involved and regulate. It’s impossible to have a one-size-fits-all statute, because we don’t know exactly how this will play out. The regulators should have some flexibility.

But Free Speech

Now let’s play through the free speech issue. Fred posts a racist meme. Facebook runs it through their AI, and .003 seconds later determines that showing Steve this post might make Steve more amenable to the LifeLock advertisement they’re about to show him. But because there’s a 99.7% probability that the post will be deemed racist and violent, their algorithm, now regulated by SMOB, gives it a bad “Steve-score” and instead opts to show Steve a picture of his sister’s dog modeling the new doggie-scarf. Is this a violation of free speech? SMOB did not stop Fred from posting the nasty racist meme. It’s still there, on his page; if you go to Fred’s page, there’s the racist meme, just like nobody stopped Steve’s sister from posting a picture of her dog. We simply ask Facebook’s algorithm to factor potential racism into the decision about which post to show Steve. We’re not regulating speech at this point; we’re regulating commerce, because this involves Facebook monetizing content that Fred made, and Congress has every right to regulate that. Again, Fred’s racist meme is still there. It’s not suppressed, it’s not hidden or flagged, it’s just not being used to sell LifeLock subscriptions. If Facebook wants to go the extra mile and remove it, that’s up to them.
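
Mechanically, that “bad Steve-score” could be as simple as a penalty layered on top of whatever base score the platform already computes. The sketch below is hypothetical: the classifier, the threshold, and the penalty shape are all invented, and the only point is that the rule affects monetized feed placement, not whether the post exists.

```python
# Hypothetical regulated-ranking step. policy_violation_probability() is a
# stand-in for a content classifier (the 99.7% figure in the example above);
# the threshold and penalty are invented. Nothing here deletes or hides the
# post -- it only loses eligibility for the monetized feed slot.
VIOLATION_THRESHOLD = 0.95  # hypothetical regulator-set cutoff

def policy_violation_probability(post) -> float:
    """Stand-in for a learned classifier scoring racist/violent content (0..1)."""
    return 0.0  # placeholder

def regulated_score(base_score: float, post) -> float:
    p = policy_violation_probability(post)
    if p > VIOLATION_THRESHOLD:
        return float("-inf")          # ineligible for monetized placement
    return base_score * (1.0 - p)     # otherwise, a soft penalty
```

In the Fred-and-Steve example, the classifier comes back at 0.997, the meme drops out of contention for the slot, and the doggie-scarf picture wins.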

Your Objections

Now for objection #1: “It’s all subjective! Truth is relative.” I don’t buy this. The objection is true only when interpreted in the absolute. Think about all the judges and juries our court system is built upon. There’s subjectivity all over the place, yet the system works. Not perfectly, but it works, because we believe in the notion of truth as something that can be determined. We believe that there is something called a “reasonable person”. We believe in concepts such as “reasonable doubt”. Reasonableness is all over the legal system, and while it is vague, it’s both flexible enough and rigid enough that our system works orders of magnitude better than the alternatives. This cynical bullshit that truth can’t actually be determined is at the heart of the Putin/Trump world order. Since we can’t trust truth, we can only trust the strongman.

Taking it one step further

If I were to run the SMOB, I would go even further. My feeling is that much of the content, especially political content, is basically junk, even though it might not be explicitly false or violent. This junk content does not contribute to the dialogue at all; it serves one purpose: to keep people outraged. This is the real sugary soda and nicotine of Facebook. And it too can be quantified. It’s slightly more delicate to do, but it can be scored and minimized. This plagues both sides of the political spectrum: so much of the content is only slightly misleading, or built on an appeal to hypocrisy, or otherwise crafted to leave users feeling angry and outraged, and this emotional manipulation is what keeps users engaged and on Facebook. The 24/7 bombardment of outrage also makes people less amenable to considering the other side of the story.
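
One hypothetical way to fold that into the same ranking machinery is an outrage penalty subtracted from the base score. The model and the weight below are assumptions; a regulator like SMOB could require the weight to be nonzero and subject to audit.

```python
# Hypothetical outrage penalty. outrage_score() is a stand-in for a model that
# estimates how much a post is engagement bait built on anger (0 = none,
# 1 = pure rage bait); OUTRAGE_WEIGHT is an invented knob a regulator might set.
OUTRAGE_WEIGHT = 0.5

def outrage_score(post) -> float:
    """Stand-in for a learned rage-bait estimator."""
    return 0.0  # placeholder

def adjusted_score(base_score: float, post) -> float:
    # Subtracting the penalty makes outrage bait compete at a disadvantage
    # instead of being rewarded for the engagement it generates.
    return base_score - OUTRAGE_WEIGHT * outrage_score(post)
```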

Also, Blockchain, what could possibly go wrong?

Facebook is also attempting to create a stablecoin cryptocurrency that it would love to see become a widely used international settlement currency. Noble aspirations to “bank the unbanked” aside, the ability to pair user data with everything you do with your money in the future (even when you put the app down) is crazy. Blockchains are open and traceable, provided you know which addresses belong to whom, so Facebook would be able to compare which content they show you with your spending habits after they show it to you. I’m not willing to just let them run away with this power, given how untrustworthy their behavior has been in the past.
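
To see why an open ledger plus a platform-held identity mapping is such a potent combination, consider the toy sketch below. Every name, table, and timestamp in it is made up; the only real claim is that once you know which addresses belong to whom, joining “content we showed you” with public on-chain spending is a trivial query.

```python
# A toy illustration of on-chain traceability. All names, tables, and timestamps
# are invented; the point is that a public ledger plus an address-to-user
# mapping makes "exposure vs. spending" a simple join.
from datetime import datetime, timedelta

# Data the platform already holds (hypothetical)
address_owner = {"0xabc123": "steve"}                                  # wallet -> user
content_log = [("steve", "coinbase_ad", datetime(2021, 7, 1, 12, 0))]  # what was shown, when

# Public on-chain data anyone can read (hypothetical transactions)
chain_txs = [("0xabc123", "exchange_deposit", 500.0, datetime(2021, 7, 1, 12, 20))]

def correlate(window_hours: float = 24.0):
    """Yield (user, content, purpose, amount) where spending followed exposure."""
    for addr, purpose, amount, tx_time in chain_txs:
        user = address_owner.get(addr)
        if user is None:
            continue
        for shown_user, content, shown_at in content_log:
            lag = tx_time - shown_at
            if shown_user == user and timedelta(0) <= lag <= timedelta(hours=window_hours):
                yield (user, content, purpose, amount)

print(list(correlate()))  # [('steve', 'coinbase_ad', 'exchange_deposit', 500.0)]
```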
