Internal memos show how a big 2018 change rewarded outrage and that CEO Mark Zuckerberg resisted proposed fixes
In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.
He pointed to the success of a BuzzFeed post titled “21 Things That Almost All White People are Guilty of Saying,” which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.
Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.
BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti’s email touched a nerve.
Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.
Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place.
Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.
“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.
They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.
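The dynamic the researchers described can be illustrated with a toy ranking sketch. The signal names and weights below are entirely hypothetical, invented for illustration; Facebook's actual ranking system is far more complex, and its real weights are not public. The point is only that when comments and reshares count for much more than passive likes, content that provokes arguments outranks quieter content.

```python
# Toy illustration of engagement-weighted feed ranking.
# All signal names and weights are hypothetical, not Facebook's.

def msi_score(post):
    """Score a post by its predicted 'meaningful social interactions'.

    Heavy weights on comments and reshares mean posts that provoke
    arguments and resharing outrank quieter, well-liked content.
    """
    weights = {"like": 1, "reaction": 5, "comment": 15, "reshare": 30}
    return sum(weights[signal] * count
               for signal, count in post["predicted"].items())

calm_post = {
    "predicted": {"like": 200, "reaction": 10, "comment": 5, "reshare": 2}
}
divisive_post = {
    "predicted": {"like": 50, "reaction": 40, "comment": 120, "reshare": 60}
}

# The divisive post wins despite drawing far fewer likes.
assert msi_score(divisive_post) > msi_score(calm_post)
```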
Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.
“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.
Facebook employees also discussed the company’s other, less publicized motive for making the change: Users had begun to interact less with the platform, a worrisome trend, the documents show.
The email and memos are part of an extensive array of internal company communications reviewed by the Journal. They offer an unparalleled look at how much Facebook knows about the flaws in its platform and how it often lacks the will or the ability to address them. This is the third in a series of articles based on that information.
Anna Stepanov, who led a team addressing those issues, presented Mr. Zuckerberg with several proposed changes meant to address the proliferation of false and divisive content on the platform, according to an April 2020 internal memo she wrote about the briefing. One such change would have taken away a boost the algorithm gave to content most likely to be reshared by long chains of users.
“Mark doesn’t think we could go broad” with the change, she wrote to colleagues after the meeting. Mr. Zuckerberg said he was open to testing the approach, she said, but “We wouldn’t launch if there was a material tradeoff with MSI impact.”
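The integrity team's proposal can be sketched in the same toy terms: remove the extra ranking credit a post earns from the reshares it is predicted to trigger down long chains. Again, the names and numbers are invented for illustration and are not Facebook's.

```python
# Hypothetical sketch of the proposed fix: drop the ranking credit a
# post earns from reshares predicted to occur further down long
# reshare chains. Names and weights are invented for illustration.

def rank_score(post, reshare_chain_boost=True):
    score = post["direct_msi"]  # likes, comments, direct shares
    if reshare_chain_boost:
        # Status quo: posts expected to cascade through long reshare
        # chains get extra weight -- the boost proposed for removal.
        score += 25 * post["predicted_chain_reshares"]
    return score

viral_rumor = {"direct_msi": 300, "predicted_chain_reshares": 40}

# With the boost, the post's score is dominated by its expected
# reshare cascade; without it, only direct engagement counts.
assert rank_score(viral_rumor) > rank_score(viral_rumor,
                                            reshare_chain_boost=False)
```

Removing the boost would have lowered scores most for exactly the cascading content the researchers flagged, which is why the change carried a measurable MSI cost.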
Last month, nearly a year and a half after Ms. Stepanov said Mr. Zuckerberg nixed the idea of broadly incorporating a similar fix, Facebook announced it was “gradually expanding some tests to put less emphasis on signals such as how likely someone is to comment or share political content.” The move is part of a broader push, spurred by user surveys, to reduce the amount of political content on Facebook. It came after the company faced criticism for the way election protesters used the platform to question the results of the 2020 election and organize the protests that led to the Jan. 6 riot at the Capitol in Washington.
“MSI ranking isn’t actually rewarding content that drives meaningful social interactions,” Mr. Peretti wrote in his email to the Facebook official, adding that his staff felt “pressure to make bad content or underperform.”
It wasn’t just material that exploited racial divisions, he wrote, but also “fad/junky science,” “extremely disturbing news” and gross images.
In Poland, the changes made political debate on the platform nastier, Polish political parties told the company, according to the documents. The documents don’t specify which parties.
“One party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative, explicitly as a function of the change to the algorithm,” wrote two Facebook researchers in an April 2019 internal report.
Nina Jankowicz, who studies social media and democracy in Central and Eastern Europe as a fellow at the Woodrow Wilson Center in Washington, said she has heard complaints from many political parties in that region that the algorithm change made direct communication with their supporters through Facebook pages more difficult. They now have an incentive, she said, to create posts that rack up comments and shares—often by tapping into anger—to get exposure in users’ feeds.
The Facebook researchers wrote in their report that in Spain, political parties run sophisticated operations to make Facebook posts travel as far and as fast as possible.
“They have learnt that harsh attacks on their opponents net the highest engagement,” they wrote. “They claim that they ‘try not to,’ but ultimately ‘you use what works.’ ”
In the 15 months following fall 2017 clashes in Spain over Catalan separatism, the percentage of insults and threats on public Facebook pages related to social and political debate in Spain increased by 43%, according to research conducted by Constella Intelligence, a Spanish digital risk protection firm.
Early tests showed that removing the boost the algorithm gave to heavily reshared content helped curb the proliferation of false content in civic and health information. Facebook made the change for those categories in the spring of 2020.
When Ms. Stepanov presented Mr. Zuckerberg with the integrity team’s proposal to expand that change beyond civic and health content—and a few countries such as Ethiopia and Myanmar where changes were already being made—Mr. Zuckerberg said he didn’t want to pursue it if it reduced user engagement, according to the documents.