Africa-Press – Namibia. Artificial intelligence (AI) is improving journalism, helping reporters work faster by transcribing interviews, cleaning up mistakes, and digging into research. It feels like a breakthrough for press freedom. However, what happens when this powerful tool starts to blur the line between fact and fiction?
Experts caution that if we are not careful, AI could put journalism’s trust and integrity at risk. The question to ponder is: how can journalists use AI to their advantage without losing control?
Emmanuel Mugisha, the Executive Secretary of Rwanda Media Commission (RMC), has called on journalists to embrace digital tools to improve their work but cautioned against abandoning journalistic ethics in the process.
He said the use of modern technology, including tools that assist in gathering and organising information, can make journalistic work faster and more efficient.
‘Ethics must guide how AI is used’
However, he noted, AI should never come at the expense of accuracy and professionalism.
“Technology is a useful tool for journalists but ethics must guide how it is used. With ethics comes accuracy, factual reporting, and the responsibility to verify every piece of information before it is shared with the public,” he explained.
Relying on computer-generated content without scrutiny could lead journalists to unknowingly spread false or misleading information, he warned.
“If journalists become lazy and stop fact-checking, they risk turning into carriers of misinformation.”

Mugisha highlighted a growing need for transparency in how news is produced, urging journalists to disclose when the content they are using is produced or sourced through AI.
“It’s important for the public to know, so they can approach the information with full awareness.”
Mugisha also warned about content manipulated to deceive, such as digitally altered images or videos, which can easily find their way into newsrooms if journalists are not vigilant.
“These days, we see a lot of manipulated content. Journalists must sharpen their skills and ensure they do not become victims or vehicles of falsehood,” he said, explaining that the credibility of journalism lies in primary sourcing and independent verification. For instance, when reporters collect information themselves, they know its source. But if they use content gathered by AI, they must question its reliability.
Mugisha urged journalists to know their legal obligations, particularly around privacy and data protection, noting: “Reporters need to understand what the law says about publishing sensitive information. There are boundaries that must not be crossed.”
He called on media practitioners to learn how to use AI properly while staying true to the core values of journalism.
Philbert Murwanashyaka, the Head of Yali Labs, a Rwandan AI company based in Kigali, said: “Reporters are using AI to transcribe interviews and collect background material. This gives them more time to focus on investigations and essential reporting.”
He said AI can help examine extensive datasets, such as public records or social media content, to identify patterns and connections that may not be obvious at first.
“A journalist working on a detailed story can use AI to find leads or verify sources more quickly,” he said.
Murwanashyaka added that AI should be seen as a form of assistance, not replacement, noting that editorial decisions and ethical judgment must remain with people. Technology should support journalists, not take over their work.
He said press freedom includes control over the tools journalists use, explaining that depending only on proprietary systems can limit that freedom.
Murwanashyaka advised that news organizations shouldn’t rely solely on AI tools owned and controlled by a few big tech companies, which might come with limitations or risks to independence. Instead, they should look into open-source tools, whose publicly available code allows more transparency, adaptability, and local control.
Newsrooms should invest in building their own technical expertise so staff understand how these tools work, can adapt or even develop them, and avoid becoming dependent on outside providers.
Murwanashyaka noted that AI assists in addressing misinformation, with tools capable of comparing public claims against verified sources and detecting manipulated images and deepfakes.
“They can trace where false information is coming from and how it spreads, which helps journalists understand the source and intention behind misleading content.”
However, he advised reporters to use AI with caution, explaining that these systems are not perfect: they can miss cultural context or reflect bias from their training data. He urged journalists to use them as part of the process, not to rely on them entirely.
“Readers should know if AI tools contributed to a story. It helps build trust and understanding.”
The goal, he said, should be to strengthen journalism without compromising its values.
Francine Andrew Saro, a reporter and editor at Fezaa.com, a Rwandan digital news platform, said her work mainly involves storytelling and writing. AI supports her by helping reformulate her sentences to make them clearer, more engaging, and grammatically correct.
‘I always review the content it generates’
“AI also assists me in exploring and developing ideas by providing concise explanations on various topics and offering different angles or directions for my stories. To use AI responsibly, I always review the content it generates to ensure it aligns with what I intended. If it doesn’t, I refine my prompts or add more context to improve the results,” she noted.
Saro explained that she never copies and pastes what AI gives her directly, but treats it as a creative assistant.
“I use my own judgment to reshape, enrich, and humanize the content, making sure it reflects my voice and intention. I supervise AI while it supports my work, but I remain in control.”
Olive Ntete, a news anchor at Rwanda Broadcasting Agency (RBA), noted that AI technology can be misused, particularly by those who don’t understand its limitations.
“AI shouldn’t be fully trusted. Journalists need to rely on instinct. AI only gives you what you ask for; it doesn’t think for you. It can improve your work, but it can’t write a full story from start to finish. Your voice matters. You write in your own style, then use AI to polish your copy. It’s there to support, not replace, the journalist,” Ntete said.
In 2023, Rwanda introduced a National AI Policy to make sure artificial intelligence is used responsibly and fairly. The policy sets clear rules to keep AI systems transparent and accountable, so they don’t cause harm or spread misinformation.
Rwanda Utilities Regulatory Authority (RURA) is in charge of making sure these rules are followed. The aim is to create a foundation where technology supports good journalism without compromising trust or ethics.