Part 1: Ethics & Responsible AI
Ethical Design and AI Meet a Mental Health Crisis
Four days after Russia invaded Ukraine, 14-year-old Darya Bondarenko’s life was shattered. She shared her story inside a police station in Bucha, a city that had become a symbol of the war’s atrocities. Sitting in an interview room, speaking through a translator, she began:
"My father, stepmother, and grandfather, and I had just left my Godmother’s house in Bucha. We were in my father’s car, and just as we were pulling away from Godmother’s home, we came across a column of Russian soldiers."
Her father, trying to avoid confrontation, backed the car up and turned onto a side street. What followed was sudden and brutal.
"I don’t know why the Russians did this," she said. "Maybe they got scared, but they started shooting at us. It was machine gun fire."
Her voice shifted—lower, rhythmic, almost mechanical.
"My father was killed instantly. My grandfather covered me with his body. Then the car caught fire. My grandfather jumped out of the car. He told me to come out and hide behind the car. My stepmother was still alive. I could see that her face and chest were covered with blood. I could hear her making gurgling sounds. Blood was coming out of her mouth. Grandfather pulled her out of the burning car and dragged her to a wooden fence by the side of the road. She died there, moments later."
Darya described these events in a flat, eerily disconnected tone. What she was describing had to be as wrenchingly traumatic as anything that could happen to a young girl. Yet as she continued her story during the half-hour interview session, she’d refer to the absolute horrors that she had witnessed almost as if the experience had happened to someone else.
Her emotional distance was a recognizable symptom of dissociation, a psychological survival mechanism in those who experience severe trauma. The prognosis for a person who is severely dissociating is grim. Without mental health support, they will typically self-medicate with drugs or alcohol. Too often, they end up with addiction problems that both damage and shorten their lives.
At the end of the half hour, Darya left the interview room. When the police interpreter was asked whether the newly orphaned girl would receive the counseling she so obviously and desperately needed, the answer was disheartening: "Unfortunately, there are millions of individuals here who have endured similarly difficult situations. She’s on her own."
The Scale of the Crisis
Stories like Darya’s illustrate why innovative, scalable, and culturally grounded solutions are urgently needed. In Ukraine today, trauma is widespread and generational. An estimated 15 million people live with mental health conditions severe enough to interfere with daily life. The war has not only displaced communities but also fractured the emotional fabric of a nation. And the overriding obstacle to any solution is that mental health professionals are scarce: by some estimates, Ukraine has 11 mental health professionals per 100,000 residents, compared with 267 in the United States.
Darya’s situation, in need of mental health support but unable to get it, is echoed in conversations across Ukraine’s social support systems. Teachers, clergy, local leaders, and first responders all describe escalating emotional strain. Community-based resources are overwhelmed, and informal caregiving networks are stretched thin. The effects ripple far beyond individual trauma, shaping how families function, how children learn, and how local economies operate.
Survivors face layers of loss: not only people, but places, routines, and identity. For many, the emotional wounds compound daily. Nightmares, anxiety, and depression make everyday life even more difficult. Trying to hold a job while enduring as many as five panic attacks a day becomes a nightmare in itself.
Building Mental Help Global
Awareness of the dire gap between the need for mental health care and the small number of providers available to meet it prompted a search for a scalable solution.
Could artificial intelligence provide support when there are no human therapists available? Could AI provide a scalable model for mental health care that doesn’t rely on waiting lists or borders or the ability to pay? Could ChatGPT-type models offer coping strategies, crisis intervention, or referral guidance to millions with no access to care?
These questions were the beginning of what would become Mental Help Global (MHG). General David Petraeus in particular found the concept worth exploring, as did tech entrepreneur Clara Kaluderovic.
Today, Mental Help Global, with the help of Petraeus and Kaluderovic, is a non-profit initiative headquartered in Kyiv and developed in partnership with five Ukrainian government ministries. It also collaborates with institutions such as the American University Kyiv, Harvard’s Program in Refugee Trauma, Stanford University, the University of Arizona, and the University of Maryland’s Bosserman Institute on Conflict Resolution. Its mission is to use AI to deliver culturally responsive, 24/7 mental health support at no cost.
At first, the focus was on harnessing the latest AI tools to bridge the gap in care. MHG partnered with Ukrainian ministries and academic institutions, ensuring that the system could offer advice and coping strategies relevant to a conflict area. MHG emphasized partnering with Ukrainian professionals to vet resources and adapt clinical guidelines, aiming to make the support as accurate and relevant as possible.
An Ethical Issue Emerges
As MHG began developing and testing the system, a glaring ethical issue emerged: even with the best intentions, there was a real risk of digital colonialism. That is, by relying too heavily on Western models and English-language resources, the AI could inadvertently overwrite or marginalize Ukraine’s unique cultural perspectives on mental health.
This wasn’t just a technical issue, but a profound ethical challenge. The danger was that AI, if not carefully designed, could homogenize cultural identities, perpetuate injustice, and undermine the very communities it was meant to serve.
The Kudzu Problem: When AI Smothers Culture
The deeper the founders of this effort got into the project, the more aware they became that without intentional design, AI risks homogenizing cultural identities. Like kudzu, the invasive vine that can blanket landscapes and smother unique flora, culturally insensitive AI could overrun national identities with generic, globalized advice, drowning out what makes each culture distinct.
Designers working from Western contexts face a structural bias toward training LLMs on Western models of therapy. After all, documents for training an LLM are abundant in English and readily available.
But what happens when LLMs are trained exclusively on English-language, Western therapeutic models?
- It Violates Cultural Dignity. Cultural heritage is more than tradition; it is a people’s identity, memory, and meaning. Erasing cultural heritage through AI systems that ignore or override local languages, beliefs, and practices is a denial of human dignity. It also transmits a morally repugnant message: your way of life doesn’t matter.
- It Undermines Autonomy and Self-Determination. Moral autonomy doesn’t just apply to individuals; it applies to communities. Societies have the right to shape their own future according to their values and history. When AI replaces or rewrites these frameworks without consent, it amounts to ethical overreach, stripping communities of agency.
- It Perpetuates Injustice. When AI reflects dominant cultural assumptions, usually from wealthy or Western nations, and applies them in another country, it can displace local knowledge systems, suppress minority voices, and widen power imbalances. Morally, this is a form of cultural injustice.
- It Breaches the Principle of Responsibility. Developers and deployers of AI have a moral responsibility to consider the consequences of their technologies. If AI tools erase cultural heritage, whether by neglect or by design, that erasure reflects a failure of ethical stewardship. A responsible system must amplify, not erase, the diversity of the world it serves.
In addition, and crucially, culturally insensitive AI systems risk misinterpreting local expressions of distress, misdiagnosing conditions, or offering advice that is irrelevant, or even harmful, in the local context. For example, AI models trained predominantly on Western data may fail to recognize or respect culturally specific coping mechanisms, beliefs about mental health, or linguistic nuances.
This would not only undermine the effectiveness of interventions but also marginalize local knowledge systems. Such erasure of local knowledge would constitute a form of digital colonialism, reinforcing global power imbalances and disempowering the very communities the technology seeks to serve.
Recognizing this, MHG shifted its approach. It became a core principle that the large language model would be trained, and the RAG (Retrieval-Augmented Generation) system continuously updated, with Ukrainian therapeutic practices, language nuances, and war-specific trauma responses.
MHG committed not only to having the program developed and run by Ukrainians, but also to ongoing collaborations with local experts, ensuring that the MHG AI would amplify, not erase, Ukraine’s cultural richness and autonomy. This evolution in thinking became central to MHG’s mission: to provide effective mental health support while preserving national identity and local agency.
MHG became increasingly committed to real-time access to databases relevant to Ukraine. The mental health professionals working on the project are Ukrainian, and they’re committed to using the most relevant and up-to-date information from trusted Ukrainian sources (e.g., clinical guidelines, therapeutic exercises). The responses the AI generates are curated for cultural sensitivity and relevance.
When someone in Ukraine seeks support, the AI will pull tailored mental health advice or coping strategies from vetted resources that reflect local needs and cultural sensitivities. For example, it might access information specifically related to PTSD management in Ukraine’s war-torn environment.
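MHG hasn’t published its implementation, but the retrieval step described above can be sketched in a few lines. What follows is a minimal illustration, assuming a hypothetical corpus of clinician-vetted Ukrainian documents and a simple topic-overlap ranking in place of whatever embedding search a production system would use; every name in it is illustrative, not MHG’s actual code.

```python
# Minimal RAG retrieval sketch (illustrative only, not MHG's actual code).
# Assumes a vetted corpus of Ukrainian clinical guidelines and exercises,
# each already reviewed by Ukrainian mental health professionals.

from dataclasses import dataclass, field

@dataclass
class VettedDocument:
    text: str                  # Ukrainian-language therapeutic content
    source: str                # e.g., a ministry clinical guideline
    reviewed_by: str           # Ukrainian clinician who approved it
    topics: set = field(default_factory=set)  # e.g., {"ptsd", "panic-attacks"}

def retrieve(query_topics: set, corpus: list, k: int = 3) -> list:
    """Rank vetted documents by topic overlap with the user's query."""
    scored = sorted(corpus,
                    key=lambda d: len(d.topics & query_topics),
                    reverse=True)
    return [d for d in scored[:k] if d.topics & query_topics]

def build_prompt(user_message: str, docs: list) -> str:
    """Place the retrieved Ukrainian passages in the LLM prompt, so the
    model answers from vetted local sources rather than generic data."""
    context = "\n\n".join(f"[{d.source}] {d.text}" for d in docs)
    return (f"Use only the vetted Ukrainian guidance below.\n\n"
            f"{context}\n\nUser: {user_message}\nResponse:")
```

The design point is the one the text itself makes: whatever the retrieval machinery, the model is steered to answer from vetted Ukrainian sources placed in its prompt, not from the Western-weighted material that dominated its original training data.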
Another important factor is language. Much of MHG’s effort, in concert with the Ukrainian government, goes into translating Ukrainian sources so they’re available for AI training. It’s slower, but it’s also an ethical imperative, and the Ukrainian government has been appreciative of these translation efforts.
MHG’s overall approach envisions a future where AI serves as a bridge, not a bulldozer. It aims to support mental health while preserving national identity, cultural dignity, and local agency.
Looking to the future, MHG seeks to celebrate, rather than cover over, the national heritage of any country. It seeks to design a global model that requires the RAG corpus to be culturally appropriate for each country that uses it, and that puts nationals of that country in charge of what is most precious to it: its identity.
MHG is being rolled out slowly in Ukraine, with small test groups. Users rate their sessions, and knowledgeable Ukrainian therapists monitor the interactions. Meanwhile, as the effort develops and evolves, MHG is committed to six ethical design principles:
- Train the system on Ukrainian therapeutic materials, not just English-language datasets.
- Employ Ukrainian staff and mental health professionals in development and oversight.
- Use language models that understand regional dialects and cultural references.
- Collaborate directly with local ministries and academic institutions.
- Evaluate all interventions for cultural alignment and potential harm, in addition to therapeutic effectiveness (a sketch of this dual check follows this list).
- Do everything with humility, recognizing the need for feedback and correction.
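The evaluation principle, in particular, lends itself to something concrete. Below is a minimal sketch of what a per-session review record might look like, assuming human review by Ukrainian therapists as described above; the field names and threshold are hypothetical, not MHG’s actual schema.

```python
# Hypothetical per-session review record for the dual-axis check named
# above: cultural alignment and potential harm are evaluated alongside
# therapeutic effectiveness, and any single failure flags the session.

from dataclasses import dataclass

@dataclass
class SessionReview:
    session_id: str
    user_rating: int              # 1-5 rating from the test-group user
    therapeutically_sound: bool   # judged by a Ukrainian therapist
    culturally_aligned: bool      # respects local idioms, beliefs, practices
    potential_harm: bool          # advice that could hurt in the local context

def needs_correction(review: SessionReview) -> bool:
    """Flag a session for follow-up if any review axis fails."""
    return (review.potential_harm
            or not review.culturally_aligned
            or not review.therapeutically_sound
            or review.user_rating <= 2)
```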
Call to Action
Ukraine’s mental health crisis is immense, but it is not unique. As we build AI systems to bridge gaps in care, wherever those gaps exist, we must ensure these systems honor and amplify the voices, values, and lived experiences of the communities they serve. We are working to ensure that technology does not become a new form of colonialism.
If you are a technologist, policymaker, funder, or advocate, join us in demanding that every AI system deployed in humanitarian settings is co-created with local experts, guided by cultural context, and held to the highest ethical standards. Support initiatives that center community engagement and resist one-size-fits-all solutions.
- If you are a technologist: demand culturally rooted AI.
- If you are a policymaker: mandate co-creation standards.
- If you are a funder: support community-based efforts.
Let’s make sure that as we bring innovation to those in need, we avoid the Kudzu Problem: respecting and embracing the cultures we work in, using local experts, and drawing from local mental health resources. The future of mental health support, and the dignity of millions, depends on us getting this right. Individuals like young Darya deserve no less.
© 2026 Mitzi Perdue. All rights reserved.