A Bill that prosecutes the weak and ignores the powerful

DR NICHOLAS OKUMU

What You Need to Know

The proposed Artificial Intelligence Bill 2026 in Kenya has raised significant legal and ethical concerns. Critics argue it creates criminal liability for company directors without proper constitutional backing, potentially leading to unjust prosecutions. Furthermore, the Bill fails to address foreign AI companies operating in Kenya, raising questions about equitable governance and accountability.

Africa-Press – Kenya. Two weeks ago, I argued in this column that Kenya’s Artificial Intelligence Bill 2026 was premature.

We had borrowed Europe’s regulatory rulebook and handed it to institutions that do not yet exist, compliance auditors who have not yet been trained, and a health sector that has never been asked what AI harms it is actually experiencing.

The right sequence was the one Singapore took: sector-specific governance frameworks first, built from documented local evidence, with legislation following only once that foundation was in place. We were doing it backwards.

That argument stands. But reading the Bill carefully against our constitution reveals a second and independent reason to send it back, one that makes the first even more urgent.

The Bill creates criminal liability through a mechanism our constitution almost certainly does not permit.

Under Section 35, once a company violates the Act, every director who knew about the AI system must prove they exercised sufficient diligence or face personal criminal conviction and up to two years in prison.

Our constitution under Article 50 is unambiguous: the prosecution proves guilt, the accused does not prove innocence. The High Court has already struck down a nearly identical provision in another statute. This one will face the same challenge.

More damaging still is how the Bill defines what is criminal. The commissioner can develop ethical guidelines and update them at any time without returning to Parliament. Violating those guidelines is a criminal offence.

So today you are compliant. The commissioner updates the guidelines. Tomorrow you are not. Our courts have twice in the past decade struck down provisions that attach criminal liability to rules that shift without parliamentary approval. This provision will fail the same test.

Then there is the requirement that AI systems must enhance rather than replace human capabilities. In clinical practice this is undefined to the point of meaninglessness.

Does an AI system that reads bone scans more accurately than a human radiologist enhance or replace? The Cabinet Secretary is supposed to answer that through future regulations. Until then, criminal liability is theoretically possible while the line between lawful and unlawful conduct remains invisible.

The proposed law openly acknowledges it draws from the European Union AI Act. But the EU Act relies entirely on graduated administrative fines. No European hospital director, no health startup founder faces criminal prosecution for a non-compliant AI tool. Kenya added a criminal enforcement layer that even its source never contemplated.

That reflects a fundamental choice flowing directly from the fact that nobody asked what AI harms Kenya is actually experiencing before the Bill was drafted.

When you do not know precisely what harms you are governing, you default to maximum deterrence. Maximum deterrence in a criminal statute is prison.

Ask yourself which provisions carry the harshest penalties. Not the ones governing AI in operating theatres or crop management. The ones governing synthetic media: AI-generated voices, manipulated likenesses, fabricated statements.

The politician who fears a deepfake video more than a misdiagnosis has designed a Bill that reflects those priorities. Kenya’s patients, its smallholder farmers, its students did not shape this Bill. They will live with its consequences.

There is a third structural problem, and it is perhaps the most revealing of all. The proposed law contains no provision explicitly governing foreign AI companies whose systems are already active in Kenyan hospitals. OpenAI’s clinical AI runs in 16 Nairobi clinics.

A Dutch company’s tuberculosis screening tool has processed tens of thousands of X-rays across seven counties. Google is testing AI-assisted maternal ultrasound with Kenyan mothers.

None of these tools is clearly bound by the Bill’s obligations, because the Bill does not explicitly reach providers headquartered outside Kenya.

The EU AI Act applies to any provider placing AI on its market regardless of where they are based. Kenya’s Bill omits that provision entirely.

The result is a framework that could prosecute the developer in Nairobi while the Silicon Valley company whose tool runs in the same clinic faces no obligation whatsoever.

A framework that governs everyone except the most powerful is not governance. It is a burden reserved for those who can least afford to carry it.

Taken together, these three problems do not call for amendments. They call for a pause. A constitutionally fragile law cannot simply be held in reserve until conditions improve.

It becomes active the moment it commences, exposing hospital directors to criminal prosecution for guidelines they cannot interpret, deterring the innovators whose experience should be shaping what good governance looks like and providing a false assurance that the governance work has been done when it has not.

The developer in Nairobi is not asking for exemption from accountability. She is asking for a framework designed with her in mind. That framework begins not with a statute but with a question: what specific harms is AI causing in Kenya’s health system right now? Is it diagnostic bias in tools trained on data that does not reflect Kenyan patients? Automated insurance denials without human review?

Community health apps giving wrong guidance in low-connectivity settings? Each harm has a different regulatory response and a different institution best placed to govern it: the Kenya Medical Practitioners and Dentists Council, the Insurance Regulatory Authority, the pharmaceutical regulator. None of them are mentioned in this Bill.

We should send this Bill back, do that work properly and return with legislation that has evidence behind it and law underneath it. What the developer in Nairobi cannot afford is a law that promises protection while delivering paralysis.

Source: The Star
