If there’s one thing Facebook has shown time and time again, it is that it consistently makes the wrong choices about how to self-regulate, manage data, and protect the people who use its social network. Initially, Facebook’s general goal was to expand users’ connections. The theory was that the more users cultivated their network, the more they could interact beyond their close social circles. It was a great principle to begin with.
Unfortunately, it all comes back to monetization. How does Meta (Facebook) make its money?
Some social networks make money through subscriptions, job postings, or other methods, but most resort to advertising. Facebook, for example, makes the bulk of its revenue through advertising. Somewhere along the line, Facebook turned from connecting users to all their social circles, even their most remote connections… to viciously trying to make a buck off their data and metrics.
Facebook’s Own Research Concludes Facebook’s Service Affects Society
In an internal study disclosed through whistleblower reports to the SEC, Facebook concluded that “we have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world.”
Facebook’s Algorithm
Every once in a while, companies develop processes to maximize their profits. On the surface, there should be nothing wrong with that. But is there? Facebook’s algorithm is optimized to pick whatever keeps users most engaged with the least effort and push it to the user’s timeline.
It would seem that whatever users “like” will appear on their timeline. So, if users don’t like something, it should not appear on their timeline, right? Wrong! It turns out Facebook’s own research shows that it is easier to keep users engaged for longer periods of time by showing them content that triggers negative feelings and reactions.
So, if Facebook wants more users engaged, interacting more, and seeing more ads, it will choose to influence users negatively for maximum effect.
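To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of an engagement-optimized ranking step. The weights, field names, and functions are invented for illustration; this is not Facebook’s actual code. The point is structural: if every predicted reaction, including an angry one, counts toward the engagement score, then the content most likely to provoke any strong reaction floats to the top of the timeline.

```python
# Hypothetical sketch of engagement-optimized feed ranking.
# Illustrative only; weights and fields are invented, not Facebook's algorithm.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    # Predicted probabilities that a given user will interact with the post.
    p_like: float
    p_comment: float
    p_share: float
    p_angry: float  # "angry" reactions still count as engagement

def engagement_score(post: Post) -> float:
    """Score a post purely by expected engagement.

    Negative reactions are weighted as heavily as positive ones,
    because any reaction keeps the user on the platform.
    """
    return (
        1.0 * post.p_like
        + 2.0 * post.p_comment   # comments imply longer time on site
        + 3.0 * post.p_share     # shares spread the post further
        + 2.5 * post.p_angry     # outrage is cheap, reliable engagement
    )

def rank_timeline(posts: list[Post]) -> list[Post]:
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

# A divisive post with a high chance of angry reactions outranks
# a pleasant family photo that only earns quiet likes.
timeline = rank_timeline([
    Post("family-photo", p_like=0.6, p_comment=0.05, p_share=0.02, p_angry=0.0),
    Post("divisive-rumor", p_like=0.1, p_comment=0.3, p_share=0.2, p_angry=0.4),
])
print([p.post_id for p in timeline])  # ['divisive-rumor', 'family-photo']
```

In this toy example, a divisive rumor with a 40% chance of an angry reaction scores roughly three times higher than a family photo that only earns quiet likes, so it is the one pushed to the timeline.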
The User Is The Product
This is a common saying about social media, though not all social networks behave the same way. The user is the product in the sense that the user, and the information that pertains to the user, is the product being sold. Your activity, your demographic information, and anything you say within that social network become part of the profile being built around you… and that profile can be monetized for more targeted ads. Given that Facebook is free to use, many would consider this “the price to pay” for the service of helping users network. But there is an underlying problem with this model: the user and their profile are not the beneficiary of the service. The real beneficiaries, the customers, are those who purchase access to the profile in order to advertise to you. You are the product.
Because you are the product and not the customer, you no longer have control over what you see. Your timeline used to be filled with recent posts from your connections… not anymore. Now, your timeline is diluted with ads, targeted posts, and some posts from your network. The dilution is so bad that you may no longer identify with the content you see on your timeline.
Nefarious actors can use targeted posts based on any of your profile attributes. This makes the job of those who intend to broadcast misinformation and confusion a lot easier. Rumors, myths, and flat-out lies can pull less skeptical users into a fantasy world of make-believe, putting them at risk when those myths cover health recommendations. Targeted hate speech becomes harder to detect because it is delivered only to specific groups, groups that are simply not protected. The product is never as protected as the customer and the vendor, much the same way that a rack of ribs has no protections other than those against harm to the consumer. The rack of ribs cannot complain to the butcher. It has no say in whether it is sold.
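As a hedged illustration of why targeting makes this easier, here is a hypothetical sketch of attribute-based audience selection. The profile fields and criteria are invented for this example; real ad platforms expose far richer attributes. The key property is that the post is shown only to the selected audience, so everyone outside it, including most would-be watchdogs, never sees it.

```python
# Hypothetical sketch of attribute-based audience targeting.
# Illustrative only; field names and criteria are invented for this example.

from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    age: int
    region: str
    interests: set[str] = field(default_factory=set)

def select_audience(profiles, *, region=None, min_age=None, interest=None):
    """Return users whose profile attributes match every given criterion."""
    audience = []
    for p in profiles:
        if region is not None and p.region != region:
            continue
        if min_age is not None and p.age < min_age:
            continue
        if interest is not None and interest not in p.interests:
            continue
        audience.append(p)
    return audience

profiles = [
    Profile("u1", 67, "midwest", {"alternative medicine"}),
    Profile("u2", 23, "midwest", {"cycling"}),
]

# A misinformation campaign can aim a false health claim squarely at the
# users most likely to believe it, and no one else ever sees the post.
targets = select_audience(profiles, region="midwest", interest="alternative medicine")
print([p.user_id for p in targets])  # ['u1']
```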
Gaming the Algorithm
Some argue that users see what they react to, so there may be a way to “game the algorithm” and see only positive content in their timeline. This argument aligns with the saying that “the opposite of love is apathy”: if you don’t love what you see, don’t react to it, and hope the algorithm stops feeding it into your timeline. But even when users react only positively, “liking” just the content they agree with and never reacting to the content they dislike, the algorithm still displays content they dislike. Gaming the algorithm appears to be a futile effort.
Users who only like family pictures, cat videos, and content they are genuinely interested in seeing still get their timelines land-mined with content that feeds their anger.
Profit Over Safety
By choosing to show divisive, hateful, and polarizing content, Facebook chooses to put profit over safety. Reactions within social media usually translate into real life in just a few steps. We see a society that is more polarized, more misinformed, and more willing to believe outlandish conspiracy theories, and that therefore puts itself in more danger: mishandled PII (personally identifiable information), an insurrection instigated through lies, false preventive measures for a raging pandemic, and false narratives around scientific facts. All of this because it is easier (read: less expensive, bigger ROI) to make people react negatively than positively.
Safety Over Profit
Even before Facebook turned, BlueKatana never advertised on Facebook because our services are not aimed at consumers. Now, with the way Facebook conducts itself, allowing data leaks, unauthorized harvesting of user data and metrics (e.g., Cambridge Analytica), and consumer-abusive processes, we know we are not aligned with Facebook and would not choose to be associated with it. Until Facebook becomes accountable for its transgressions, we will not offer Facebook-related services.