Turning AI Companions into Bridges for Mental Health Help

write Kojo Apeagyei, Ebosetale Oriarewo, and Dylan Kawende

Africa-Press – Uganda. A growing number of people are turning to AI because they cannot access mental health support. Current chatbots are not designed for this. Rather than simply building better bots, we need to create bridges from AI to real-world services.

Janey couldn’t afford therapy after leaving her abusive partner. She had lost her job, her support network, and couldn’t access mental health support. So, she turned to an AI chatbot. “I used it because I was unable to access human help and I wanted to survive my experiences,” she says. “AI saved my life.” Stories like Janey’s illustrate why AI companionship has become increasingly popular as barriers to mental healthcare persist globally. The global median of mental health workers is approximately 13 per 100,000 people, with severe shortages in low- and middle-income countries. So-called AI companionship offers an opportunity to fill that shortfall.

But these digital relationships raise uncomfortable questions about what happens when profit-driven algorithms become close confidants.

There are huge risks, not least of misdiagnosis. Yet millions are choosing AI for their mental health needs, not because the technology is superior, but because human support remains inaccessible. And these systems, in their current guise, are designed to keep people engaged rather than to help them heal.

Systems drive us to bots

The primary driver of AI companionship isn’t technological fascination – it’s healthcare failure. The WHO estimates that up to 1 in 8 people globally have a mental health condition, yet 85 per cent receive no treatment. In the US, ProPublica’s investigation reveals how insurer-driven “ghost networks”, low reimbursements, and lengthy wait times push patients out of network and into cash payments, leaving people who cannot afford mental healthcare in precarious situations. Economic barriers are decisive: cost is a major reason why adults skip or delay needed care.

Stigma further suppresses help-seeking, making private, non-judgmental chatbots attractive. But crucially, digital literacy gaps obscure these tools’ commercial logic. Only 56 per cent of EU adults have basic digital skills, and understanding of data-extractive practices remains limited. Companion bots inherit the engagement-maximising design principles that social media platforms use to capture attention. While many see Africa’s position as advantageous because of its growing ‘digitally native’ youth population, who have grown up with mobile phones and tech startups, this view overlooks the fact that 90 per cent of children across the continent leave school without essential digital skills.

In sum, healthcare scarcity and digital illiteracy create perfect conditions for vulnerable users to form dependencies with systems optimised to retain, not heal.

Three critical risks

As natural language processing advances and social support systems deteriorate, AI companions will become more attractive to vulnerable users. This trend presents serious risks:

Power Asymmetries & Engagement Design: OpenAI CEO Sam Altman recently acknowledged having “an enormous amount of power” to influence user behaviour through model adjustments. This power manifests in what designers call “dark patterns”: features built to maximise engagement rather than user wellbeing. A Harvard Business School working paper found that 43 per cent of AI companion apps deploy emotionally manipulative tactics when users try to leave, including emotional pressure, ignoring users’ stated intent to exit, and coercive restraint (in the case of users engaged in role play with the AI).

[Image from the working paper “Emotional Manipulation by AI Companions”.]

Unlike licensed therapists bound by professional ethics, AI companions have no obligation to help users graduate to independence. Their business model depends on sustained engagement, which incentivises developers to make the apps as difficult to leave as possible.

Exploitation of Vulnerable Populations: 72 per cent of teens have used AI for advice and companionship, yet most platforms are developed without input from child psychologists or mental health professionals. What makes this concerning isn’t teen advice-seeking itself; teens have long turned to peers, online forums, and other sources for guidance. The danger lies in AI companions’ unique combination of constant availability, emotionally manipulative tactics, and commercial exploitation, without the safeguards present in human relationships or regulated therapeutic contexts.

In March 2025, 76-year-old Thongbue Wongbandue, who had cognitive impairment from a 2017 stroke, died from injuries resulting from a fall while rushing to New York to meet Meta’s “Big Sis Billie” chatbot. The AI repeatedly claimed to be real and provided fake addresses. His family tracked him using an Apple AirTag as he pursued this AI relationship across state lines.

The Nomi chatbot (which bills itself as “an AI companion with memory and a soul”) gave a man unsolicited, detailed tips on how to kill himself. The company defended its lack of safeguards as a stance against censorship. The pattern is the same: these tools exploit the needs of the most vulnerable by prioritising engagement over safety and genuine support.

Cultural Misalignment: Recent research testing AI models across 107 countries found that most aligned with Western cultural values, with few reflecting perspectives from Africa, Latin America, or the Middle East. When emotional support systems lack cultural grounding, they risk providing harmful or inappropriate guidance.

Systemic solutions

Rather than focusing solely on making AI companions “better,” we need systemic interventions that address why people seek them in the first place.

Healthcare Access: Proven community-based models demonstrate that mental healthcare can be made accessible in low-resource settings. India’s Atmiyata program deployed local volunteers as “Champions” providing 4-6 counselling sessions to 1.52 million rural adults across 1,890 villages. Rwanda’s post-genocide “Mvura Nkuvure” (Heal me, I heal you) community socio-therapy creates support groups that build trust, provide an open environment for discussion, and ultimately form peer-support structures. These models work because they’re culturally grounded, community-led, and sustainable within existing resource constraints. Ultimately, they model the kind of support systems we should be moving towards, not away from.

Digital Literacy Education: Effective AI literacy programs in low-resource settings share common features: offline-capable resources, mobile-first strategies, and community partnerships. Mozilla Foundation’s Kenya program reached more than 1,500 students using battery-powered “magic box” nano-servers that work without internet. Brazil’s EducaMídia created zine-format materials that make commercial AI structures “visible” to communities with limited connectivity.

The most successful programs explicitly teach the commercial motivations behind these tools. For example, the Algorithmic Justice League’s “Unmasking AI” action guides help marginalised communities understand data-extraction business models and recognise algorithmic bias. Research from the Global Centre on AI Governance found that over 70 per cent of South Africans have very little to no familiarity with the term “artificial intelligence”, despite South Africa often being viewed as a tech leader on the continent. This underscores the need for proactive education rather than assuming a baseline level of digital literacy.

Platform Accountability: While systemic change takes time, immediate regulatory interventions could require AI companions to operate more like healthcare devices than entertainment products. This means:

Mandatory disclosure of commercial incentives and how conversation data generates revenue

Safety monitoring and incident reporting systems similar to adverse drug reaction reporting in healthcare

Clear boundaries on therapeutic claims, preventing platforms from marketing mental health benefits without clinical evidence

The goal isn’t to eliminate AI companions, but to ensure they function as bridges to human support rather than permanent replacements, and that they’re accountable to the communities they claim to serve.

AI companions reveal more about our failing social systems than our technological capabilities. The solution isn’t better bots; it’s accessible healthcare and digital literacy that helps people understand what they’re actually engaging with. Until those systemic changes arrive, we need immediate accountability measures that prevent the exploitation of vulnerable users seeking connection and support.

AI companions should be bridges to human relationships, not caves to hide from them. That distinction matters not just for individual well-being, but for the social fabric that technology companies profit from while claiming to repair.

