Making AI Self-Regulation Work in India
Paper published by Amlan Mohanty, Associate Research Fellow at CeRAI
The Centre for Responsible AI (CeRAI), IIT Madras, has published a paper titled “Making AI Self-Regulation Work – Perspectives from India on Voluntary AI Risk Mitigation”, authored by Amlan Mohanty, Associate Research Fellow at CeRAI.
The paper explores the evolving landscape of AI governance in India with a focus on self-regulatory approaches. It provides a conceptual overview of AI self-regulation, analyses sentiment across different stakeholder groups in India, and recommends a policy roadmap for its effective implementation.
It follows an earlier paper by Amlan Mohanty titled “India’s Advance on AI Regulation”, published by Carnegie India in November 2024.
The paper published by CeRAI was formally launched on April 15th with special remarks from Abhishek Singh, Additional Secretary, Ministry of Electronics and Information Technology, and Balaraman Ravindran, Head, CeRAI.
This was followed by a roundtable discussion featuring government officials, industry leaders, academics, and civil society representatives, which charted a way forward for operationalising a set of voluntary AI commitments for India.
Key findings of the paper
The paper highlights the growing consensus around self-regulation as a key pillar of AI governance. In India, an expert committee constituted by the Principal Scientific Advisor and convened by MeitY has endorsed self-regulation in the form of voluntary commitments to foster trust and transparency in the AI ecosystem. There is also broad support for AI self-regulation within industry, though scepticism exists among some civil society representatives.
The paper defines self-regulation, identifies certain pre-conditions, explains its goals and limitations, and provides examples of effective implementation.
Based on this analysis, the paper makes the following key recommendations:
Develop a Risk-Based Classification: The paper calls for a structured approach to classify AI use cases based on various risk factors. This would help determine which applications should be subject to self-regulation. This framework should draw on empirical data, real-world cases, and input from diverse stakeholders to reflect the Indian socio-cultural context.
Ensure Government Involvement: Active government participation is essential for effective self-regulation. The paper urges the government to be involved in initiating, developing, and endorsing voluntary frameworks based on alignment with national priorities and regulatory objectives.
Introduce Market Incentives: To encourage adoption of voluntary frameworks, the paper recommends financial, regulatory, and reputational incentives, such as linking self-regulation to public procurement, grants, and regulatory sandboxes. It also stresses the need for frameworks to be accessible and practical, particularly for smaller firms.
Adopt Accountability Measures: Given the lack of legal enforceability of voluntary codes, alternative accountability measures need to be implemented. Organisations should be encouraged to publish transparency reports, update their platform policies, adopt self-certifications and international standards, monitor actions of industry peers, and conduct audits where feasible.
Provide Institutional Support: Institutional support is essential for sustaining AI self-regulation efforts. The proposed AI Safety Institute for India can play a pivotal role in guiding industry initiatives, developing benchmarks, and promoting the widespread adoption of AI safety tools. Additionally, a Technical Advisory Council may be established to provide expertise to government agencies, facilitate risk assessments, and support compliance efforts. In the long term, exploring the feasibility of industry-led Self-Regulatory Organisations can help create accountability within sectors.
You can read the paper here.