This article uses the AI lifecycle approach common in AI governance research to examine risks and policy across three stages: Design, Training and Testing; Deployment and Usage; and Long-Term Diffusion. At each stage, effective governance requires three policy goals: creating visibility into AI systems, promoting best practices for safe development, and establishing enforcement mechanisms.
In November 2025, India unveiled its AI Governance Guidelines, with infrastructure as the first strategic pillar. The guidelines offer plans for 38,231 GPUs, databases across 20 sectors, and generous schemes for startups. For a developing country like India, with its diverse population, the focus of AI policy is on promoting inclusive development in health, education, and agriculture. India’s AI governance framework is unique in that it looks at building and governing AI simultaneously.
India’s framework is guided by seven core principles (called ‘sutras’): Trust is the Foundation; People First; Innovation over Restraint; Fairness & Equity; Accountability; Understandable by Design; and Safety, Resilience and Sustainability. The ‘Innovation over Restraint’ principle is particularly significant: it explicitly prioritises responsible innovation over precautionary restraint, shaping the framework’s reliance on voluntary measures rather than restrictive regulation. The seven principles are operationalised through six strategic pillars: Infrastructure, Capacity Building, Policy & Regulation, Risk Mitigation, Accountability, and Institutions.
This article analyses how India’s framework both innovates and reveals critical gaps.

Design, Training and Testing: Building While Regulating

India’s infrastructure-first approach is to build domestic compute capacity and databases, reducing its dependency on foreign AI systems that it cannot independently evaluate or regulate. To promote innovation, India has offered a host of incentives for startups and AI entrepreneurs, such as tax rebates and subsidised loans.
In a way, India is creating the material conditions for governance by making available computing resources and standardised evaluation datasets, which are essential for scaling adoption in critical sectors such as health, education, and agriculture. Such measures also create an enabling environment for fairness testing of AI systems in the Indian context, a useful template for countries in the Global South. However, building computational capacity alone, without robust accountability mechanisms, risks enabling harm at scale. India’s challenge, from the short term to the long term, will be to understand whether its reliance on voluntary compliance can foster innovation without compromising safety.
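To make ‘fairness testing’ concrete, here is a minimal sketch of one common fairness metric, the demographic parity gap. The guidelines do not mandate any particular metric, and the groups below are hypothetical stand-ins for whatever communities an Indian evaluation dataset would encode.

```python
# Minimal sketch of a demographic parity test: the gap between the
# highest and lowest positive-prediction rates across groups. Group
# labels here are purely illustrative placeholders.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Max difference in positive-prediction rate across groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy loan-approval predictions for three hypothetical groups.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
grps = np.array(["a", "a", "a", "b", "b", "b", "c", "c", "c", "c"])
print(f"demographic parity gap: {demographic_parity_gap(preds, grps):.2f}")
```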
India proposes to use its innovative Data Empowerment and Protection Architecture (DEPA), a ‘techno-legal’ system for permission-based data sharing that integrates data protection principles into digital public infrastructure, ensuring compliance by design. Using DEPA for AI training would support privacy-preserving mechanisms and make the use of personal data more transparent and auditable.
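DEPA’s core idea is that data flows only under a machine-readable consent artefact. The following hypothetical sketch shows how a training pipeline might gate each record behind such a check; the `ConsentArtifact` fields and `may_use` function are illustrative assumptions, not the actual DEPA or electronic-consent schema.

```python
# Hypothetical consent gate for a training pipeline, loosely inspired by
# DEPA's consent-artefact idea. Field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentArtifact:
    data_principal: str           # whose data it is
    purpose: str                  # e.g. "model-training"
    data_types: tuple[str, ...]   # categories the principal consented to
    expires: datetime             # consent is time-bounded

def may_use(record_type: str, purpose: str, consent: ConsentArtifact) -> bool:
    """Allow a record only under a valid, unexpired, purpose-matched consent."""
    return (
        purpose == consent.purpose
        and record_type in consent.data_types
        and datetime.now(timezone.utc) < consent.expires
    )

consent = ConsentArtifact(
    data_principal="user-123",
    purpose="model-training",
    data_types=("health-records",),
    expires=datetime(2026, 12, 31, tzinfo=timezone.utc),
)
print(may_use("health-records", "model-training", consent))    # True
print(may_use("location-history", "model-training", consent))  # False
```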
However, the document highlights a trade-off: privacy protection can cause performance loss on certain benchmarks, reducing utility, though the concern might be mitigated by the overriding guiding principle of prioritising ‘innovation over restraint’. The document recommends complementary measures, such as algorithmic auditing and sector-specific regulations, alongside DEPA for effective governance of AI training.
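The privacy-utility trade-off is easiest to see in differentially private training. The guidelines do not prescribe a mechanism, so the sketch below uses DP-SGD, a standard differentially private training algorithm, as one example: each example’s gradient is clipped and noise is added to the average, and raising the noise multiplier strengthens privacy at the cost of benchmark accuracy.

```python
# Minimal NumPy sketch of one DP-SGD update step. This is illustrative;
# the guidelines do not prescribe DP-SGD or any specific mechanism.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(weights, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1):
    """Clip each example's gradient, average, add calibrated noise, update."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Larger noise_multiplier -> stronger privacy, lower benchmark accuracy.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=weights.shape)
    return weights - lr * (mean_grad + noise)

# Toy usage: gradients for a batch of 4 examples on a 3-parameter model.
w = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(4)]
w = dp_sgd_step(w, grads)
print(w)
```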
On creating visibility, India proposes a combination of formal legal enforcement and voluntary compliance. It mandates that AI organisations publish evaluations of the risks and harms their systems pose to society and individuals in the Indian context. Further, it tasks the AI Safety Institute with testing and evaluating AI systems, though submission is voluntary. In many respects, then, transparency reporting and peer monitoring lack legal enforceability, which contrasts with the UK framework, where model reporting to regulators and third-party auditing are mandatory.
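The guidelines do not specify a machine-readable format for these published evaluations. Purely as a hypothetical sketch, a minimal, auditable transparency report might look like the following; every field name is an assumption, not part of the official framework.

```python
# Hypothetical structure for a published risk-evaluation report. All
# field names are illustrative assumptions, not an official schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class RiskEvaluationReport:
    system_name: str
    developer: str
    risk_categories: list[str]      # e.g. the six categories in the guidelines
    evaluation_methods: list[str]   # benchmarks, red-teaming, audits run
    known_limitations: list[str]
    mitigations: list[str]
    india_specific_harms: list[str] = field(default_factory=list)

report = RiskEvaluationReport(
    system_name="example-llm-v1",
    developer="Example Labs",
    risk_categories=["bias/discrimination", "malicious use"],
    evaluation_methods=["multilingual toxicity benchmark", "red-teaming"],
    known_limitations=["lower accuracy in low-resource Indian languages"],
    mitigations=["content filters", "human review for flagged outputs"],
    india_specific_harms=["caste- and religion-based stereotyping"],
)
print(json.dumps(asdict(report), indent=2))
```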
Deployment and Usage: The Deepfakes Priority
At the deployment stage, India’s priorities become explicit. The guidelines identify six risk categories: malicious uses, bias/discrimination, transparency failures, systemic risks, loss of control, and national security. But resource allocation tells the real story.
Content authentication receives detailed attention: a committee to develop watermarking standards, integration with Coalition for Content Provenance and Authenticity (C2PA) standards, and MeitY’s proposed mandatory labelling rules for AI-generated content. Deepfakes are described as a ‘growing menace’ requiring ‘immediate action.’ The concern is valid given the complexity and diversity of India’s political economy and the risk of misinformation inflaming public sentiment.
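The mechanics of content authentication are worth illustrating. C2PA binds a signed provenance manifest to a piece of media so that any edit breaks the binding; the toy sketch below conveys that idea with an HMAC rather than C2PA’s actual certificate-based manifests, and none of it matches the real specification.

```python
# Toy illustration of provenance binding: a signed claim over the
# content hash. Real C2PA uses certificate-based signatures and a
# standardised manifest format; this is conceptual only.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # stand-in for a proper signing certificate

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,  # e.g. "synthetic / AI-generated"
    }
    sig = hmac.new(SIGNING_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(media_bytes: bytes, manifest: dict) -> bool:
    claim = manifest["claim"]
    expected = hmac.new(SIGNING_KEY, json.dumps(claim, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())

image = b"\x89PNG...fake image bytes"
m = make_manifest(image, "synthetic / AI-generated")
assert verify(image, m)             # untouched content verifies
assert not verify(image + b"x", m)  # any edit breaks the binding
```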
There are, however, no pre-deployment capability evaluations. The guidelines are premised on the assumption that voluntary compliance, market incentives, and existing laws will suffice for the moment, with the presumption that many of these guidelines will evolve following systematic review of consumer behaviour and any trade-offs or social harms that emerge.
Long-Term Diffusion: Who’s Vulnerable
The guidelines mention ‘vulnerable groups’ thirteen times, with specific reference to women and children. For children, they raise concerns about AI affecting mental health, exposure to harmful content, and more. The risks for women and girls include being targeted by harmful AI-generated content, and the guidelines explicitly allude to ‘revenge porn.’ These concerns will have to be monitored over time, and effective guardrails will need to be put in place. Since India is developing LLMs in multiple languages, a built-in SOS system could be developed and linked to human-controlled risk and safety centres, such as an AI helpline (sketched below). In the Indian context, vulnerability may also take unintended forms: AI might help overcome bias or discrimination based on race, caste, or religion, but much will depend on how well the algorithms are trained on the data and how bias is managed.
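The guidelines do not describe how such an SOS hook would work. Purely as a hypothetical sketch, assume a multilingual chat pipeline that matches distress phrases and escalates to a human-staffed helpline; the phrase lists, function name, and helpline number below are illustrative placeholders.

```python
# Hypothetical SOS hook for a multilingual chat pipeline. Phrases,
# names, and the helpline number are illustrative placeholders only.
DISTRESS_PHRASES = {
    "en": ["hurt myself", "blackmailing me"],
    "hi": ["मुझे धमकी"],  # illustrative Hindi phrase ("threatening me")
}

HELPLINE_NUMBER = "1930"  # placeholder for a human-staffed safety centre

def check_sos(message: str, lang: str) -> str | None:
    """Return an escalation notice if the message matches a distress phrase."""
    for phrase in DISTRESS_PHRASES.get(lang, []):
        if phrase in message.lower():
            # In production this would page a human-controlled safety
            # centre, not just return a string.
            return f"Escalated to human responder. Helpline: {HELPLINE_NUMBER}"
    return None

print(check_sos("someone is blackmailing me with fake photos", "en"))
```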
Studies show that, unlike in the West, data in India is not always reliable; communities are sometimes missing from or misrepresented in databases. Large swathes of the rural population, especially women, indigenous tribes, and elders, do not use the internet at all; the digital divide is enormous. The Fairness & Equity sutra commits to fairness ‘particularly for marginalized communities.’ But without naming which communities, or creating participation mechanisms for them, this remains aspirational.
India’s Path Forward
India’s institutional framework represents genuine innovation. Rather than creating a single AI regulator (expensive, slow to establish), the guidelines propose coordination across existing institutions: an AI Governance Group (AIGG); a Technology & Policy Expert Committee (TPEC); an AI Safety Institute (AISI) responsible for research, standards development, and safety testing; and sectoral regulators. This ‘whole-of-government’ approach leverages existing regulatory capacity rather than building from scratch.
As India prepares to host the AI Impact Summit in February 2026, the guidelines will shape the global conversation from a non-Western context. What India needs now are coordination and enforcement mechanisms, so that its AI regulation is robust and industry-friendly while protecting citizens from unintended harm.
Concrete next steps could include specifying which AI applications require mandatory (not voluntary) safety evaluations, defining the ‘sensitive sectors’ requiring human oversight, creating community participation mechanisms within governance structures, and establishing a timeline for converting voluntary measures into mandatory requirements.
References:
Ministry of Electronics and Information Technology (MeitY). (2025). India AI governance guidelines. Government of India. https://www.meity.gov.in/
NITI Aayog. (2020). Data Empowerment and Protection Architecture (DEPA): A new paradigm for data empowerment and protection. Government of India. https://www.niti.gov.in/
Bernardi, J., Mukobi, G., Greaves, H., Heim, L., & Anderljung, M. (2024). Societal adaptation to advanced AI. arXiv preprint arXiv:2405.10295. https://arxiv.org/abs/2405.10295
Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., … & Kaplan, J. (2022). Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. https://arxiv.org/abs/2204.05862