Meta’s decision to completely dismantle its fact checking programme with immediate effect will result in the accelerated abuse of the company’s technologies. This abuse will invariably lead to the loss of life, adding to the blood already on Mark Zuckerberg’s hands. This reading is neither fatuous nor false; it is simply the forward projection of what the Global South repeatedly confronted for years before the company’s technology came to be associated with significant harms, violence and democratic decay in the West.

Mark Zuckerberg’s decision is based on the exigencies of US politics and on currying favour with the incoming president. Ironically, it was President Donald Trump’s role in instigating 2021’s violent and unprecedented Capitol Hill insurrection that kick-started Meta’s investments in fact checking. Yet now, new appointments to Meta’s Board and Global Policy Team signal the company’s ingratiation with the incoming US administration, already defined by the production and promotion of falsehoods at industrial scale.

For many in the Global South, this expediency is neither surprising nor novel. In January 2019, senior Facebook officials, including the Public Policy Director for India, South & Central Asia, met with former president Mahinda Rajapaksa in Sri Lanka, causing consternation amongst civil society groups since the meeting came less than a year after the country’s worst ever anti-Muslim riots, given succour by the ruling party’s violent Sinhala-Buddhist nationalism. Facebook was directly implicated in the seed and spread of content inciting hate, which gave rise to and helped fan islandwide violence against Muslims and their property. Two people were killed and dozens were seriously injured. A year later, the same official resigned over an article in the Wall Street Journal which reported that she had told other Facebook staff in India that punishing violations by politicians from Prime Minister Narendra Modi’s party, the BJP, “would damage the company’s business prospects in the country”.

India remains Facebook’s largest market, and in 2021 the New York Times, reporting on a tranche of documents surfaced by Facebook whistleblower Frances Haugen, flagged how the company lacked sufficient resources to tackle issues it had created, particularly anti-Muslim content. The same article noted Facebook’s well documented complicity in the generation of violence against the Rohingya in Myanmar, the automated mass subscription of users in Sri Lanka to groups that exposed them to hateful content, and the successful use of the platform by militia groups in Ethiopia to coordinate violence. The examples are endless, and each features significant violence, including death and destruction.

Meta’s technology occupies an integral role in our lives and is inextricably entwined in our politics, society, industry, commerce, the arts and activism. The same technology that fuels violence is relied upon by those who bear witness and document for posterity. The recent announcement, however, will significantly debilitate the use of the company’s products to strengthen democracy and will favour autocracy, enabling powerful politicians and their proxies who found fact checks inconvenient to promote falsehoods and disinformation without any meaningful guardrail. Astonishingly, the revised policies on Hateful Conduct delink the instigation of hate and violence online from its offline consequences, despite over a decade of bloody evidence establishing a strong correlation. Zuckerberg’s announcement gaslights academics, researchers and civil society, many of whom have been victims of violence themselves, by suggesting, without providing any evidence, that the existing architecture of fact checking is “politically biased” and has “destroyed trust”, and that a system akin to Twitter/X’s Community Notes is the way forward. This is an unprecedented and fundamental policy reboot, presented without any indication of the company’s existing capability to realise what Zuckerberg says it will now do.

The shift is significant, replacing formal systems and professional fact checking with crowd-sourced moderation and informal feedback, centralised oversight of platform integrity with a more distributed model pegged to motivated users, and global standards based on rigorous evidence with subjective presentations of reality. Each of these shifts is an essay in platform entropy.

Professional fact checkers operate within established methodological frameworks, employing systematic approaches to verification that include source triangulation, expert consultation and standardised evaluation criteria. The shift to crowd-sourced moderation introduces significant variability in verification standards, potentially privileging popular opinion over factual accuracy. Centralised oversight, despite its limitations, provided a coherent framework for content moderation decisions, enabling systematic tracking of misinformation patterns and coordinated responses to emerging threats. The distributed model, while potentially more scalable, introduces significant variability in enforcement standards and reduces institutional capacity to identify and respond to coordinated disinformation campaigns. The transition from global standards to user-generated interpretation poses significant challenges for platform integrity, especially in majoritarian democracies where swarms of users can be employed by the state and autocrats to stifle inconvenient truths published by investigative journalists or civil society. Individually and collectively, these factors will lead to the deterioration of shared facts and realities, accelerating the fragmentation of public discourse into isolated information bubbles that in turn erode social cohesion.

The implications for Global South markets will be profound and immediate, rolling back what researchers, academics, policymakers and professional fact checkers have worked with Meta over years to support and strengthen. Historically marginalised groups, identities and communities will suffer the most, and it will be far worse than in the past given entrenched network dynamics and algorithms that aid the rapid spread of hate and extend the reach of harms. Zuckerberg himself admits that the new moderation framework (or lack thereof) “means we’re going to catch less bad stuff”. What he means is that our lives will now be at risk because of content and commentary the company will not action.

It gets worse.

Meta’s new policy of minimal interference is pegged to the maximal use of artificial intelligence and machine learning for content moderation. This will fail. Doctoral research on the instigation of anti-Muslim hate after Sri Lanka’s Easter Sunday terrorism in April 2019 highlighted the widespread use of violative memes to inflame tensions and influence public opinion at scale. Memes require deep socio-political, cultural and linguistic familiarity to decode, with visual presentations that appear humorous often telegraphing significant violence. Despite the billions of dollars already invested in AI, there is no technology at Meta that allows it to understand memes and the intent of their producers, especially in the lead-up to and during times of offline unrest, when production runs into the thousands, if not more. Coupled with other local language considerations impacting hundreds of millions in the Global South, the new policy pivot will enable the coordinated seed and spread of harms in ways that escape automated oversight and even manual reporting. Nothing good will result.

We are looking at a near-term future where Meta’s platforms are defined by coordinated networks of state-aligned accounts, untrammelled professional trolling operations with significant resources, the systematic manipulation of platform architectures and sophisticated falsehoods produced at scale. All this will amplify, in an unprecedented manner, what’s already very familiar to Global South human rights activists: coordinated disinformation campaigns, manufactured evidence of alleged wrongdoing (including the false attribution of violence or criminality) and manipulated media targeting minorities. Countries struggling with a democratic deficit will henceforth find that Meta aids the decay of human rights far more than it helps stymie autocracy.

I am writing this out of fear, shock, disbelief and anger. I am also profoundly sad. The 19-year-old Zuckerberg’s candid admission that his users were “dumb fucks” now reads less like youthful arrogance and more like a mission statement. As we witness the deliberate dismantling of safety mechanisms that took years to build using our time, labour, insight, research and data, it’s clear Meta’s new policy isn’t just about reducing moderation costs or appeasing MAGA in the US; it is about consciously choosing to be complicit in future atrocities while maintaining plausible deniability. A company that once promised to connect the world has instead chosen to profit from its fracturing, one life, one community, one violent conflict and one genocide at a time.

A version of this article was published by the Centre for the Study of Organised Hate.