Standing With Science in a Staff-Scarce Health System: The Promise of AI


This essay is part of the series: World Health Day 2026: Standing with Science in an Age of Shared Risk


World Health Day 2026 is framed by the World Health Organization (WHO) as a year-long campaign that promotes scientific collaboration as a precondition for credible policy and collective action. It positions public trust as a determinant of health outcomes, and places One Health at the centre of contemporary risk, linking people, animals, plants, and the planet. In India, this framing invites the question of whether the health system is developing the institutional capacity required to translate scientific advancement and evidence into service delivery at scale, particularly in contexts where workforce constraints remain binding. The Ministry of Health and Family Welfare’s Strategy for Artificial Intelligence in Healthcare for India (SAHI), released in February 2026, provides a useful peg for this discussion because it attempts to articulate governance conditions under which health AI (artificial intelligence) can be deployed without undermining safety, equity, and trust.

SAHI also extends an earlier policy arc, with NITI Aayog’s 2018 National Strategy for Artificial Intelligence placing healthcare among priority sectors under the ‘#AIforAll’ framing and explicitly connecting adoption to access, affordability, shortages, and inconsistency of skilled expertise. While the 2018 strategy was necessarily broad, oriented to national capability and enablers across sectors, SAHI is introduced at a later stage, when deployments are no longer hypothetical and AI is being expanded beyond controlled pilots. While much of the global discourse on artificial intelligence centres on displacement and labour substitution, India’s public health system operates under a markedly different constraint: persistent and documented workforce shortages. In this context, the relevant policy question is not whether AI will replace health workers, but whether it can augment limited specialist capacity, improve triage and workflow efficiency, and extend clinical reach to underserved geographies without diluting accountability.

SAHI proposes a structured governance framework for the development, evaluation, procurement, and deployment of artificial intelligence within the health sector. It adopts a risk-based approach to AI use in healthcare, emphasising proportionate regulation depending on intended use and potential harm. SAHI consciously departs from the language of technological optimism and instead anchors AI within the routine processes of health governance. It argues that tools must be built for specified clinical tasks, evaluated in the environments where they will actually be used, and introduced with transparency about both capability and limitation. It further treats AI systems as dynamic interventions whose performance must be observed over time, particularly in heterogeneous and capacity-constrained settings. By aligning adoption with regulatory pathways and national digital infrastructure, the strategy seeks to ensure that AI becomes part of the health system’s institutional architecture rather than just an external technological overlay. Instead of positioning AI as a standalone technological solution, SAHI situates it within broader health-system strengthening, with explicit attention to trust, accountability, and patient safety as preconditions for scale.

Staffing Scarcity as a Given, and AI as a Force Multiplier

Any practical roadmap for AI in Indian healthcare must begin with workforce scarcity and system capacity. The Partnership for Health System Sustainability and Resilience (PHSSR) India report (2024) describes a workforce struggling with an acute shortage of skilled staff, both generalist and specialist, at all levels of service and across states. The shortage is not only a recruitment deficit but also a distribution and skill-mix problem, manifesting in limited clinical time, uneven supervision, variable team continuity, and constrained referral pathways. These features shape the feasibility of any intervention, including digital ones.

This is also where the potential contribution of emerging technologies needs to be framed precisely. AI will not resolve staff shortages by itself, and it cannot replace clinical responsibility. Its most plausible contribution is to act as a force multiplier, making scarce specialist capacity more available to underserved areas by reducing avoidable workload, improving triage, and enabling more structured task-sharing. In practice, this means using AI to standardise screening and prioritisation, to support frontline providers in routine decision pathways, and to improve the efficiency and safety of remote consultations that connect peripheral settings to specialist hubs. The risk, particularly in a system under staffing stress, is that assistive tools will be treated as substitutes for missing expertise and deployed without sufficient safeguards. SAHI is important because it attempts to pre-empt that drift by tying adoption to evaluation, monitoring, and accountability rather than to demonstration effects.

Embedding AI into India’s Telemedicine Scale

The strongest near-term case for AI in India lies in tasks that map cleanly onto everyday workflow decisions: screening support, triage, documentation structuring, and decision support that remains advisory. Tuberculosis (TB) screening using computer-aided detection (CAD) software for chest radiographs illustrates both the opportunity and the governance challenge. WHO has recommended CAD for TB screening since 2021 and continues to update technical guidance as products evolve. In India, CAD has been deployed over the last few years in contexts where radiology expertise is scarce, providing a consistent triage layer that helps prioritise confirmatory testing and referral. However, performance is shaped by threshold setting, device and image-quality variation, local prevalence and case mix, and the integrity of follow-on pathways. These determinants are implementation factors, and they sit largely outside the model itself. SAHI’s insistence on lifecycle governance and post-deployment monitoring aligns with this reality.

Clinical decision support in high-volume primary care and telemedicine settings represents another pathway through which AI can ease staffing constraints. The most credible value proposition is not automated diagnosis, but inputs such as structured history-taking, protocol prompting, and documentation support that can improve consistency and reduce omissions in routine care. The research community has treated workflow integration and human factors as central to safety for precisely this reason, and DECIDE-AI was developed to improve reporting and evaluation of early-stage clinical AI systems in real-world settings. If decision-support tools are to genuinely strengthen the system, they must operate within clear referral pathways and supervisory arrangements that ensure clinicians retain responsibility and that patterns of errors are identified early.

India’s telemedicine scale has implications for the immediate future of AI deployment. eSanjeevani reported more than 45.9 crore consultations as of March 2026, with service availability across all states and union territories. A Press Information Bureau release in February 2023 reported a peak of more than half a million consultations in a single day and stated that the system had been augmented to support more than one million consultations a day. Even after allowing for variability, these figures imply a weekly caseload in the millions. At this scale, the operational question is no longer whether AI belongs in telemedicine, because eSanjeevani already includes an AI-enabled clinical decision support layer, but whether such tools can be embedded in routine workflows in a way that measurably reduces avoidable workload, improves referral decisions, and extends specialist oversight to underserved settings while remaining auditable and clinically accountable.

What SAHI Contributes under the “Stand with Science” Frame

SAHI’s risk-based and lifecycle view of health AI treats trust and accountability as prerequisites for scale. This direction aligns with WHO’s ethics guidance, which foregrounds transparency, accountability, inclusiveness, and protection against harm and bias. It aligns with WHO’s regulatory considerations as well, which emphasise documentation, validation, change control, and post-deployment monitoring rather than one-time assurance.

The key challenge is whether SAHI’s governance logic becomes institutional practice in a system shaped by workforce scarcity and uneven capacity. If ‘standing with science’ is taken seriously, progress should be judged by whether AI is being used in routine care in ways that can be independently examined over time, especially where specialist access and system capacity are most constrained. SAHI provides a serious framework for that shift, and its credibility will be established by whether it shapes routine behaviour across programmes, states, and the public-private interface, particularly where system-level gaps are most glaring.


Oommen C. Kurian is Senior Fellow and Head of the Health Initiative at the Observer Research Foundation.

The views expressed above belong to the author(s).


