Africa-Press – Uganda. Despite the continent's growing connectivity, many of its languages go unmoderated on social media platforms. This poses important questions about online safety, writes Mbongeni Jonny Msimanga.
As Africa’s digital footprint expands, the continent finds itself drawn into global debates about freedom of expression, misinformation, and online safety. Yet a silent crisis is unfolding in African content moderation: big tech platforms such as Meta, X (formerly Twitter), and TikTok monitor and police content in only eight African languages, on a continent of over 1.5 billion people speaking close to 2,000 tongues.
This, of course, raises questions about inclusion, but also more profound ones about equity, safety, and power in a digital public sphere largely controlled by Big Tech companies from outside Africa.
A continent filtered through eight languages
While Africa has rich linguistic diversity, content is moderated in only a handful of its languages. Currently, checks are carried out only on content in Arabic, Amharic, Swahili, Hausa, Somali, isiZulu, Yoruba, and Afrikaans.
This means that vast swathes of online conversations, from Shona in Zimbabwe to Wolof in Senegal and The Gambia to Tigrinya in Eritrea, are invisible to moderation tools or left to the mercy of algorithms that do not understand them. As a result, harmful content in these underrepresented languages often goes undetected, allowing misinformation, hate speech, and abuse to thrive. This digital exclusion not only endangers online safety but also reinforces linguistic and cultural marginalisation in African digital spaces. In essence, Africa’s diversity is treated as an inconvenience rather than an asset.
The human cost of linguistic blind spots
The implications of this linguistic bottleneck are serious. Even in the languages that are moderated, problems persist. In Ethiopia, content inciting violence in Amharic went unchecked on Facebook during the Tigray conflict of 2020–2022, with Meta admitting after the fact that it had been too slow to act. In Kenya, disinformation in Swahili circulated widely during the 2022 elections, undermining trust in democratic processes.
But what about less “mainstream” African languages? Without moderation in these tongues, hate speech, gender-based harassment, and incitement to violence can spread without consequence. At the same time, automated tools often over-police African dialects, flagging posts as harmful because of unfamiliar grammar or slang and censoring innocent content. This digital language gap does not just affect individuals; it threatens democratic participation, public health communication, and local content creators.
However, the geographic imbalance in content moderation is not limited to Africa. In Myanmar in 2018, it contributed to tragic offline consequences and grave human rights abuses. Rather than systematically redesigning their content moderation policies and tools in consultation with local experts and affected communities, the big tech companies have done little more than retroactively hire local content moderators. This has created a host of new problems related to working conditions, psychological trauma, and underpayment. This reactive approach highlights a lack of genuine accountability and long-term commitment to responsible moderation in vulnerable regions. Without structural changes, platforms will continue to prioritise corporate image management over the protection of at-risk communities.
Why Big Tech isn’t doing more
At the root of the problem is a lack of investment. Companies have been unwilling to pay for human content moderators fluent in “less mainstream” languages. Training AI models in under-resourced African languages requires extensive data and cultural expertise. But African markets are not seen as profitable enough to warrant such effort. This is unlikely to change unless there’s international pressure or media scrutiny.
Human moderators who understand the nuances of African languages are rare, poorly paid, and often outsourced through opaque third-party contracts. Furthermore, moderation policies are often designed in California boardrooms, far removed from the realities of Johannesburg, Mogadishu, or Lusaka. This disconnect results in moderation decisions that lack cultural sensitivity and often misinterpret context, humour, or political expression. As a result, harmful content may remain online while legitimate speech is wrongly removed, deepening mistrust in digital platforms across the continent.
What needs to happen
Addressing the language gap in content moderation is no longer optional. It is a moral and political imperative. Big tech companies must:
Invest in natural language processing tools for African languages. These investments should go beyond token inclusion and focus on building robust, context-aware models developed in collaboration with local linguists, researchers, and communities to ensure accuracy and cultural relevance.
Work with local universities, linguists, and civil society to build ethical moderation frameworks. These partnerships can help develop ethically grounded, context-specific guidelines that reflect local norms, languages, and social dynamics to ensure moderation practices are both effective and respectful of human rights.
Hire more African moderators with cultural fluency and lived experience. Moderators with cultural fluency and lived experience are better equipped to interpret context, detect nuance, and distinguish between harmful content and legitimate expression, reducing errors that can silence marginalised voices or allow abuse to go unchecked.
Lastly, they must publish transparent reports on which languages their moderation covers, so the public knows what is being prioritised and what is being ignored. Meanwhile, African governments and digital rights groups must hold tech giants accountable, not just for their presence on the continent, but for their silence and omissions.
LSE