Last November, in a lengthy post-election essay, Mark Zuckerberg famously dismissed accusations that Facebook had affected the outcome of the 2016 U.S. Presidential election by allowing “fake news” on the platform.
A brief portion below (emphasis my own):
“After the election, many people are asking whether fake news contributed to the result, and what our responsibility is to prevent fake news from spreading. These are very important questions and I care deeply about getting them right. I want to do my best to explain what we know here.
Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.
I wrote about this topic at length in my previous post, Facebook and Post-Truth.
Things have changed since November.
Facebook, Twitter, and Google will appear before Congress to testify on Russian interference in the election; Facebook has already turned over more than 3,000 Russian-paid political ads promoting ideologies across the spectrum.
Trump tweeted that Facebook has always had an anti-Trump bias:
— Donald J. Trump (@realDonaldTrump) September 27, 2017
(Though he did not seem to mind this “bias” when he explicitly stated that Facebook helped him win. *shrugs*)
Zuckerberg, countering Trump’s claim of bias, wrote another personal essay laying out his idealistic vision of the Facebook community:
Every day I work to bring people together and build a community for everyone. We hope to give all people a voice and create a platform for all ideas.
Trump says Facebook is against him. Liberals say we helped Trump. Both sides are upset about ideas and content they don’t like. That’s what running a platform for all ideas looks like.
I appreciate Zuckerberg’s insistence on maintaining Facebook’s universal appeal, but I question the humanity-focused motivations he so desperately wishes to put forth. The world’s largest social media platform certainly has an interest in keeping as many eyeballs as possible. The simple formula: more users + targeted ads = profit.
Facebook’s advertising algorithm has recently come under intense scrutiny, and for good reason. The independent news site ProPublica discovered that hateful phrases like “Jew-hater” or “how to burn Jews” could easily be used to promote ads to almost 2,300 people. Instagram, Facebook’s photo- and video-sharing app, ran a Facebook ad featuring the phrase “I will rape you” after a user uploaded a screenshot of the harassing comment.
Clearly, advertising algorithms need some work.
But the problem isn’t with how these algorithms function; their sole purpose is to put content in front of people who will connect with it. Facebook’s entire business model rests on matching content with its users’ interests, and in ProPublica’s study, it did so almost too accurately.
I, personally, prefer to view headlines and ads that show me information I am already interested in. I am also aware of how quickly that innocent “preference” becomes an ideological bubble, a bubble in which advertisers can show increasingly specialized content in hopes I might “click here.”
Social networks have no real financial interest in serving up ideologically balanced ads or articles. We pay for “free” services like Google and Facebook with our eyeballs and our clicks. Unless users intentionally seek out websites and people who hold opinions different from their own, what they see in their News Feed is not the full picture. The algorithms make sure of it.
Note: The night before I finished composing this piece, Mark Zuckerberg posted a brief apology to mark the final day of the Jewish holiday Yom Kippur. Full quote below: