It took Facebook six months to fix the bug that promoted fake news on users' News Feeds.

A bug in Facebook’s News Feed ranking algorithm caused a sudden ‘surge of misinformation’ in users’ News Feeds. According to an internal memo reported by The Verge, instead of suppressing posts from accounts that repeatedly shared fake news, the algorithm gave those posts more distribution, driving up views of misinformation by as much as 30 percent globally.
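The exact cause of the failure is not public. Purely as an illustration of how fragile this kind of logic can be, the hypothetical sketch below shows how a demotion factor applied the wrong way boosts a post instead of suppressing it (all function names and numbers here are invented, not Facebook's actual code):

```python
# Purely illustrative: hypothetical ranking scores, not Facebook's real system.

def rank_score(base_score: float, repeat_offender: bool,
               demotion_factor: float = 0.5) -> float:
    """Intended behavior: halve the score of posts from repeat fake-news sharers."""
    if repeat_offender:
        return base_score * demotion_factor
    return base_score


def buggy_rank_score(base_score: float, repeat_offender: bool,
                     demotion_factor: float = 0.5) -> float:
    """One wrong operator (divide instead of multiply) turns demotion into a boost."""
    if repeat_offender:
        return base_score / demotion_factor  # 2x boost instead of 0.5x demotion
    return base_score


print(rank_score(100.0, True))        # 50.0  -> suppressed, as intended
print(buggy_rank_score(100.0, True))  # 200.0 -> amplified instead
```

A single-character error like this would pass unnoticed in normal operation and surface only in aggregate metrics, which is consistent with a bug lying dormant for two years.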

The report also says that during the time the bug was active, Facebook’s algorithm failed to properly demote posts containing nudity, violence, and “even Russian state media the social network recently pledged to stop recommending in response to the country’s invasion of Ukraine.”

Facebook’s engineers first identified a ‘massive failure’ in the company’s ranking algorithm back in October 2021, but it wasn’t fixed until March 11; in the intervening months the bug flared up repeatedly, pumping posts containing misinformation into users’ News Feeds.

What’s worrisome is that Facebook’s internal documents show the technical issue was first introduced in 2019 but went undetected until October 2021, when it began to have a noticeable impact. Simply put, the bug had been active since 2019 and was only fixed in March this year.

Upon detection, the issue was internally ranked as a level-one SEV, or site event, the term the company uses for a high-priority issue. Responding to the matter, a Facebook spokesperson told the publication, “We traced the root cause to a software bug and applied needed fixes.” The spokesperson also said the bug hasn’t had any ‘meaningful, long-term impact’ on the company’s metrics.

It is worth noting that the incident comes on the heels of the company regularly touting the advances its algorithms were making in downranking posts containing misinformation and other potentially harmful information. The incident also shows how opaque and complex Facebook’s algorithms are, even for its own employees. In addition, it calls into question Facebook’s decision to downrank posts containing harmful content rather than remove them from its platform, something Facebook whistleblower Frances Haugen also highlighted in her testimony before the US Senate last year.

“We’ve seen from repeated documents within my disclosures that Facebook’s AI systems only catch a very tiny minority of offending content…Best case scenario, in the case of something like hate speech, at most they will ever get to 10 to 20%,” she told the Senate last year.

“I’m a strong proponent of chronological ranking, or ordering by time with a little bit of spam demotion, because I think we don’t want computers deciding what we focus on,” she added.
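The approach Haugen describes, ordering by time with a light spam demotion, can be sketched in a few lines. This hypothetical example (field names and the spam flag are assumptions, not any real feed API) sorts posts newest-first while sinking flagged spam below everything else:

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    timestamp: int          # e.g. Unix seconds; higher = newer
    is_spam: bool = False   # output of a separate spam classifier


def chronological_feed(posts: list[Post]) -> list[Post]:
    """Newest first, with a light spam demotion: flagged posts sink to the bottom.

    Sorting by the tuple (is_spam, -timestamp) keeps all non-spam posts in
    reverse-chronological order and groups spam last, also newest-first.
    """
    return sorted(posts, key=lambda p: (p.is_spam, -p.timestamp))


feed = chronological_feed([
    Post("old post", 100),
    Post("newest post", 300),
    Post("recent spam", 250, is_spam=True),
])
print([p.text for p in feed])  # ['newest post', 'old post', 'recent spam']
```

The appeal of this design, in Haugen's framing, is that the ordering rule is simple enough for anyone to inspect: there is no engagement model deciding what users focus on.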

Facebook, for its part, has published a list of the kinds of posts it demotes on its platform. But the company has so far failed to explain what impact that downranking is actually having. It’s safe to say that its algorithms aren’t as accurate or as efficient as the company touts them to be, and that more transparency and oversight are needed to make the platform safe to use.