Part 3: Policy, Regulation & Legislation
AI Policy and Regulation in Healthcare for Developing Countries
1. Introduction
Sunita sat quietly, heart racing, in a small rural clinic in Bihar, India, awaiting the result of her diabetic retinopathy screening. For years, patients like her struggled to get timely eye examinations – often traveling hours or even days to overcrowded hospitals – only to wait weeks for test results. Many lost precious vision, their conditions worsening with every passing day.
Today, however, Sunita’s experience was dramatically different. Within minutes, a revolutionary Artificial Intelligence (AI) system [1] – developed through collaboration between Indian hospitals and Google’s DeepMind – analyzed images of her eyes, instantly and accurately identifying early-stage signs of diabetic retinopathy. This swift, precise assessment meant Sunita would start treatment immediately, preserving her eyesight and protecting her future.
India's proactive commitment to AI in healthcare started in June 2018. Under its ambitious "AI for All" national strategy [2], the Indian government has prioritized healthcare innovation, emphasizing local validation, data security, and patient safety. Initiatives led by influential policy institutions like NITI Aayog underscore responsible AI principles, emphasizing explainability, fairness, and rigorous bias testing to ensure technologies benefit all citizens equitably.
India's National Digital Health Mission (NDHM) [3] is further advancing these efforts by crafting a robust regulatory framework explicitly tailored for digital health solutions. Innovative regulatory sandboxes proposed by the Central Drugs Standard Control Organization (CDSCO) [4] now provide controlled environments for medical AI startups, enabling rapid experimentation alongside comprehensive oversight. Such strategic steps are paving the way for AI solutions to scale nationwide safely, efficiently, and responsibly.
Sunita’s story illustrates the positive impacts of well-crafted policies and visionary regulation. India's carefully structured approach – balancing innovation, patient protection, and ethical responsibility – is turning AI from a futuristic dream into tangible improvements in healthcare. As Sunita leaves the clinic smiling, she represents the countless patients whose lives and futures are being transformed. Her journey showcases not just technological progress, but the profound human impact of thoughtful governance – unlocking AI’s full potential to serve humanity in India and beyond.
2. Rightsizing Policy and Regulation for Maximum Patient Impact
AI, especially multi-agentic AI, is enabling stories like Sunita’s to repeat in other Low- and Middle-Income Countries (LMICs) such as Kenya, Brazil, Indonesia, and Rwanda. Effectively deploying AI in healthcare presents a critical policy challenge for LMICs. Too little regulation exposes patients and healthcare providers to significant risks: without adequate oversight, unsafe or untested AI tools can cause medical errors and compromise patient safety, eroding public trust in the healthcare system. Additionally, the lack of clear regulatory standards fosters market confusion, allowing subpar products to proliferate, exacerbating inequities, and widening gaps in healthcare access between urban and rural areas.
Conversely, overly stringent or heavy-handed regulations pose an equally serious risk. Excessive regulatory burdens stifle innovation, discourage investment, and delay the introduction of life-saving technologies into clinical practice. Complex, inflexible regulatory frameworks favor large multinational companies and inhibit local innovators and smaller providers. This further deepens inequalities and reduces agility, limiting the ability of healthcare systems to adapt swiftly to emerging health challenges and technological advancements.
Figure 1 – Effects of Healthcare Policy and Regulations on Patient Impact
Figure 1 above shows the intensity and quantity of regulation on the X-axis and the corresponding impact on the ultimate beneficiaries, the patients, on the Y-axis. The figure illustrates the need for a "right-sized" regulatory structure: policies designed to strike a balance between protection and flexibility. Right-sized regulation establishes clear, risk-proportionate rules that foster public trust without stifling innovation. Mechanisms such as regulatory sandboxes, similar to India’s CDSCO [4], and conditional approvals, similar to Japan’s PMDA [13], allow stakeholders to test AI solutions safely, facilitating timely deployment and adaptation to local contexts. By embedding fairness and equity considerations directly into standards, regulators ensure AI solutions benefit all segments of society, especially underserved populations.
Ultimately, balanced regulation enhances patient safety and builds lasting trust in AI-enabled healthcare. Hospitals and healthcare providers confidently adopt validated AI tools; pharma and medical device companies invest sustainably, reassured by predictable governance frameworks; insurance companies rely on AI-driven care decisions with greater certainty; and patients gain timely, equitable access to high-quality care. For LMIC governments, right-sized regulation ensures AI becomes a powerful catalyst for improved health outcomes, addressing pressing healthcare gaps efficiently and sustainably.
3. Stakeholders in LMICs
Rightsizing policy in an LMIC also depends heavily on two important factors: the stakeholders involved and the phase in which each stakeholder operates. Implementing and governing healthcare AI requires collaboration among many stakeholders, from providers and industry to patients and policymakers, each with a distinct role.
Figure 2 – Stakeholders in Healthcare Policy Making
The following are the main stakeholders in making AI strategy and its impact successful in any LMIC:
Hospitals - Hospitals (public or private) are the frontline care providers seeking better patient outcomes and efficiency. Facing staff and resource shortages, they pursue AI to expand capacity and streamline workflows.
Pharmaceutical & Medical Device Firms – These companies develop medicines, vaccines, and health technologies, aiming to innovate and expand markets while improving health outcomes. They use AI to accelerate R&D and partner with tech developers, hospitals, and regulators to bring AI-driven innovations to market safely.
AI Developers – Tech companies, startups, and research labs build AI tools to solve health system challenges at scale. They collaborate with clinicians and policymakers for data access and domain insights, ensuring solutions recognize local contexts, meet real needs, and comply with health regulations.
Insurance Companies – Public insurance programs and private health insurers finance care and aim for cost-effective, quality services while staying solvent. They use AI to automate claims, detect fraud, and manage risk. Insurers work with providers and tech firms on AI that improves patient outcomes and streamlines coverage decisions.
Patients – Patients and communities are the end-users of healthcare, seeking accessible, affordable, and quality services. In LMICs, AI can extend medical expertise to underserved areas, enabling earlier diagnoses; patients support these benefits but also demand privacy, transparency, and fairness in AI use.
Governments – Governments (health ministries, regulators) oversee health systems and strive to improve public health and achieve universal coverage. Many see AI as key to strengthening services and meeting these goals. Policymakers update strategies and regulations to foster innovation while managing AI’s risks to privacy, bias, and safety.
4. AI Governance Maturity Framework for LMICs for Healthcare
Each of the above stakeholders plays a significant role in the various phases of AI deployment, policy creation, and impact. In LMICs, implementing AI in healthcare requires coordination across all stakeholders and a phased approach to policy development. Here, a three-phase, progressive, and adaptive framework is proposed for developing and deploying policy and regulation.
Figure 3 – Evolution of Healthcare Policy in LMICs
4.1 Phase 1 – Guidelines and Pilots
This is the first phase and usually lasts 1-2 years. The table below shows the role each stakeholder needs to play during this phase. This framework is designed to guide stakeholders in ensuring that AI adoption is safe, equitable, and aligned with evolving health policies.
| Stakeholder | Stakeholder Role in AI Policy in Guidelines and Pilots Phase |
|---|---|
| Hospitals | Pilot AI tools in select clinical areas to gather evidence and build staff capacity. Ensure patient safety and share feedback with policymakers to inform early guidelines. |
| Pharmaceutical & Medical Device Companies | Invest in AI-driven health innovations (e.g., diagnostic algorithms or smart devices) tailored to local needs. Demonstrate safety and efficacy through local trials and join early regulatory sandboxes or forums to help shape initial guidelines. |
| AI Developers | Focus on designing AI solutions for pressing local health issues, embedding ethics, privacy, and bias mitigation. Participate in pilot deployments with hospitals and adhere to any guidelines or sandbox requirements to prove safety and effectiveness. |
| Insurance Companies | Experiment with AI in internal processes (e.g., claims processing, fraud detection) through limited pilots to improve efficiency. Begin drafting basic internal guidelines for AI use and share outcome insights with policymakers, highlighting any observed risks or fairness concerns. |
| Patients | Stay informed about emerging AI-driven health services and engage with pilot programs (e.g., AI-assisted telemedicine) when available. Provide feedback through patient groups or surveys to highlight user experience and concerns, helping authorities draft patient-centric guidelines. |
| Governments | Define a national strategy for AI in health and publish initial ethical guidelines to steer development. Support pilot projects and digital infrastructure upgrades, and convene stakeholders (providers, industry, patients) to gather feedback for shaping formal policies. |
Most LMICs fall into this phase. Examples are Ghana [5] (adopted an AI strategy and is launching digital health pilots), Bangladesh [6] (launched a 2020 AI strategy that includes health initiatives), and Kenya [7] (unveiled a 2025 AI strategy prioritizing healthcare solutions and innovation).
4.2 Phase 2 – Regulations and Scale-up
This is the second phase and usually lasts 3-5 years. The following table details the role each stakeholder will play in the second phase.
| Stakeholder | Stakeholder Role in AI Policy in Regulations & Scale-Up Phase |
|---|---|
| Hospitals | Integrate validated AI solutions into routine care with proper staff training and updated protocols. Establish internal oversight (e.g., an AI ethics board) to ensure compliance with new regulations and share outcome data with authorities for accountability. |
| Pharmaceutical & Medical Device Companies | Align new AI-based products with emerging regulatory standards (e.g., data transparency, quality requirements). Provide robust trial data and post-market surveillance reports to regulators. Support healthcare providers with training to ensure safe scaling of these innovations. |
| AI Developers | Ensure new models meet standards for data quality, transparency, and validation. Maintain comprehensive documentation and engage with regulators for approvals or certifications, updating algorithms to address bias or errors identified in practice. |
| Insurance Companies | Adopt AI tools for underwriting and care management in line with new regulations on data use and fairness. Clearly inform customers when AI influences decisions (such as claim approvals) and ensure human review is available while collaborating with providers to integrate AI in ways that improve cost and health outcomes. |
| Patients | Participate in public consultations and patient advocacy forums on AI in healthcare. Advocate for informed consent, data privacy, and accountability in AI tools, ensuring that new regulations safeguard patient rights and equitable access. |
| Governments | Establish clear regulations and standards for AI (covering data governance, efficacy validation, and privacy) and set up dedicated units for oversight. Build institutional capacity by training regulators and collaborating with experts, and enforce compliance via transparent approval and audit processes as AI deployments grow. |
Examples of LMICs in this category are Nigeria [8] (partnered with the Gates Foundation to scale AI health solutions nationwide), Uganda [9] (scaling a pilot AI platform for maternal health to 100 clinics), and Brazil [10] (advancing an AI strategy to drive health innovation).
4.3 Phase 3 – Integration and Innovation
This is the third phase and usually lasts 5-10 years. The following table details the role each stakeholder will play in this phase.
| Stakeholder | Stakeholder Role in AI Policy in Integration & Innovation Phase |
|---|---|
| Hospitals | Leverage AI across services for efficiency and quality improvement as standard practice. Continuously evaluate performance, refine clinical protocols, and collaborate with regulators on policy updates using real-world evidence. |
| Pharmaceutical & Medical Device Companies | Continuously improve AI offerings within the established regulatory framework, focusing on safety and equity. Collaborate with regulators and healthcare partners to refine standards, ensure quality, and broaden access to proven AI-powered therapies and devices. |
| AI Developers | Perform continuous monitoring and auditing of AI systems to uphold high safety standards. Publish transparency reports and share best practices. Participate in industry self-regulation to ensure AI innovations remain trustworthy and compliant at scale. |
| Insurance Companies | Fully integrate AI into risk assessment and personalized care programs under strict regulatory oversight. Develop insurance products that cover approved AI-driven services (e.g., telehealth diagnostics) and share data on costs and outcomes with regulators to support evidence-based policy updates. |
| Patients | Use approved AI tools (such as wearables or symptom-checkers) to proactively manage health in partnership with providers. Through patient associations, co-design improvements in AI services, and hold hospitals, insurers, and developers accountable for ethical and transparent AI use via feedback and public dialogue. |
| Governments | Continuously refine AI governance through periodic policy updates to match technological advances. Foster an integrated ecosystem by aligning with international best practices, promoting data interoperability, and institutionalizing multi-stakeholder collaboration so that AI innovation thrives under strong public oversight. |
Examples of LMICs in this phase are India [2] (pioneering AI-driven health innovations from drug discovery to telehealth), China [11] (integrating AI, with 90% of hospitals using it in 2025), and Rwanda [12] (moving beyond pilots to deploy AI tools nationwide).
It’s important to note that these stages are a continuum – LMICs may find different parts of their health system at different maturity levels simultaneously. For example, an urban hospital network might be at Phase 2 while rural primary care is at Phase 1. Policy frameworks must therefore be flexible enough to cater to varied contexts within a country. The overarching principle is adaptive regulation: start with foundational principles and minimal barriers during nascent stages, then progressively build towards comprehensive governance as capacity grows. This adaptive approach prevents both extremes: a regulatory void (with unchecked risks) and premature over-regulation (which could stifle early innovation or lock out beneficial technologies).
5. Conclusion
Sunita’s swift and accurate AI-driven diabetic retinopathy diagnosis at a rural Indian clinic exemplifies the immediate, real-world impacts of AI systems. Yet, millions of Sunitas are still waiting in LMICs.
AI, particularly multi-agent AI systems, offers an urgent opportunity for healthcare in LMICs, delivering immediate and tangible benefits such as accurate diagnostics, early intervention, and optimized resource allocation. Early pilot initiatives addressing urgent health concerns like maternal risk screening, epidemic forecasting, or telemedicine triage are not only saving lives but also building trust and demonstrating AI’s value to policymakers, funders, and communities.
A strategic roadmap with clear phases is essential for scalable and sustainable impact. This involves developing comprehensive regulatory frameworks, embedding ethical standards early, establishing dedicated Health AI oversight bodies, and training healthcare professionals to responsibly integrate AI tools into their workflows. A phased roadmap – prioritizing immediate, impactful AI use cases in the short term (1-2 years), formalizing standards and infrastructure investments in the medium term (3-5 years), and achieving a mature, trusted AI healthcare ecosystem in the long term (5-10 years) – ensures lasting benefits.
Only through urgent policy action paired with long-term planning can LMICs harness AI to protect patients, achieve equitable outcomes, and transform healthcare.
Endnotes:
[1] Brant A, Singh P, Yin X, et al. Performance of a Deep Learning Diabetic Retinopathy Algorithm in India. JAMA Netw Open. 2025;8(3):e250984. doi:10.1001/jamanetworkopen.2025.0984 (https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2831702)
[2] India: National Strategy for Artificial Intelligence (#AIforAll) (https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf)
[3] National Digital Health Mission India, Strategic View 2020 (https://www.niti.gov.in/sites/default/files/2023-02/ndhm_strategy_overview.pdf)
[4] Central Drugs Standard Control Organization India (https://cdsco.gov.in/opencms/opencms/en/Home/)
[5] Ghana National Artificial Intelligence Strategy 2022 (https://drive.google.com/file/d/1BBOCB6r6qERMt0lzpzGC-fl2yS0aaMTd/view)
[6] National Strategy for Artificial Intelligence Bangladesh 2020 (https://ictd.gov.bd/sites/default/files/files/ictd.portal.gov.bd/legislative_information/c2fafbbe_599c_48e2_bae7_bfa15e0d745d/National%20Strategy%20for%20Artificial%20Intellgence%20-%20Bangladesh%20.pdf)
[7] Kenya Artificial Intelligence Strategy 2025-2030 (https://ict.go.ke/sites/default/files/2025-03/Kenya%20AI%20Strategy%202025%20-%202030.pdf)
[8] Nigeria launches AI scaling hub with Gates Foundation (https://dig.watch/updates/nigeria-launches-ai-scaling-hub-with-gates-foundation)
[9] Keti AI: Ugandan Doctor Bridging the Gap in Healthcare Communication (https://www.itnewsafrica.com/2024/07/ai-chatbot-bridging-the-gap-in-healthcare-communication/)
[10] The regulation of artificial intelligence for health in Brazil begins with the General Personal Data Protection Law. Rev Saude Publica. 2022 Aug 22;56:80. doi:10.11606/s1518-8787.2022056004461 (https://pmc.ncbi.nlm.nih.gov/articles/PMC9423092/)
[11] China: Healthy China 2030 Initiative (https://www.who.int/teams/health-promotion/enhanced-wellbeing/ninth-global-conference/healthy-china)
[12] Republic of Rwanda: The National AI Policy (https://www.minict.gov.rw/index.php?eID=dumpFile&t=f&f=67550&token=6195a53203e197efa47592f40ff4aaf24579640e)
[13] Japan PMDA (https://pmc.ncbi.nlm.nih.gov/articles/PMC10432865/)
© 2026 Paritosh Ambekar, PhD. All rights reserved.