AI and the Future of Work in India
By: Omir Kumar, Policy Analyst, Centre for Responsible AI, IIT-M and Krishnan Narayanan, Co-founder and President of itihaasa Research and Digital
The 2024-25 Economic Survey has called on Indian policymakers to pay attention to AI’s impact on labour markets. The Survey notes that the increasing capabilities of AI systems, coupled with their decreasing costs, threaten the existing workforce. Globally, the ILO estimates that about 75 million jobs are at risk of full automation due to AI.
This is especially pertinent for a country like India, which is a services-led, labour-surplus economy.
Low-value-added services jobs are the most vulnerable to automation, as companies may substitute labour with technology to bring down costs. A survey of white-collar workers in India by IIM Ahmedabad (2024) found that 68% of the surveyed employees expect their jobs to be partially or fully automated by AI within the next five years, and 40% believe AI will make their skills redundant. The Economic Survey observes that if such projections hold true, they could veer India’s economic growth trajectory off course.
The Economic Survey calls for collaboration among policymakers, the private sector, and academia to leverage AI for societal good while minimizing disruptions. It emphasizes investment in education and workforce skilling, supported by institutionalizing three types of mechanisms: enabling, insuring, and stewarding. Enabling institutions equip the workforce with the necessary skills, insuring institutions support displaced workers, and stewarding institutions balance public welfare with innovation.
Let us build on the ideas from the Economic Survey and explore what AI regulation and policymaking should focus on.
Policy Nudges for Broadbasing AI Skilling
The rapid evolution of AI mirrors a historical pattern: new technologies initially benefit only the few who possess the necessary skills, while the majority takes time to catch up. James Bessen, an economist and author of “Learning by Doing,” has extensively studied the impact of automation on employment. He emphasizes that implementing a new technology requires more than the invention itself; it requires widespread learning and skills development.
In the context of AI, this implies that the focus should not rest solely on developing advanced AI models, but also on ensuring that a broad base of individuals can acquire the skills to use, adapt, and implement the technology effectively. This includes promoting vocational training and on-the-job experience so that workers gain the practical skills they need. This corresponds to the “enabling” function identified in the Economic Survey. Let us examine some ideas that policymakers should carefully assess.
Bessen stresses the importance of knowledge sharing and collaboration for successful technology adoption. Government policy and regulation in the field of AI should thus foster open models and open innovation by promoting the exchange of technical knowledge. Restrictive intellectual property laws that limit the mobility of workers or the diffusion of knowledge should be avoided or repealed.
AI’s rapid evolution presents an additional challenge: because the technology is still changing quickly, standardisation lags behind. This makes it difficult for the labour market to price labour, especially when skills are specific to a single company or a small group of firms. Policies that encourage the development of standards are needed for labour markets to function efficiently.
Bessen highlights that the benefits of new technology are not automatically distributed equally. The early stages of AI adoption may increase inequality if only a small group of highly skilled workers can leverage the technology. Government policy and regulation therefore need to ensure that the economic benefits of AI are more widely shared. This means developing robust labour markets that value and reward the skills needed to work with AI, not just those who invent the technology. This would include training programs and certifications that help workers acquire and demonstrate these skills. Government procurement can also play a role by establishing high-quality standards for new technology and promoting skills development.
Finally, Bessen suggests that policy should adapt to the specific stage of a technology’s development. At AI’s current early stage, policies should prioritise flexibility, open innovation, and broad-based learning. A singular focus on formal education, such as college degrees, is unlikely to be sufficient; policy needs to balance support for formal education with vocational training, on-the-job learning, and knowledge sharing.
The emergence of "digital workers" or AI assistants, which can learn faster than humans, presents both an opportunity and a challenge. While these tools can accelerate the adoption of AI, it is crucial to recognise that humans still have a role to play, particularly in areas that require tacit knowledge and adaptability. The interplay between humans and AI workers in the workplaces of the future will have to be studied deeply. This raises the question of automation vs. augmentation.
Nudging Markets to Focus on AI Augmentation
What is augmentation? It is the use of AI to enhance and complement human capabilities. For instance, rather than replacing a teacher, an AI system augments the educator by analysing data on each student’s strengths and weaknesses and creating customized lesson plans in real time. A recent Anthropic study of usage on Claude.ai found that 57% of tasks involved augmentation. It noted that when AI augments rather than replaces, productivity improves while meaningful human engagement is maintained.
However, markets may not always create solutions that prioritize augmentation over automation. Daron Acemoglu and Pascual Restrepo point out that when one technological paradigm is ahead (in this case, automation), markets tend to follow it even when the alternative (AI augmentation) is more productive. Further, they observe that innovation may also be driven by cultural and other factors, which prompt companies and researchers to choose automation over other alternatives.
To drive markets towards augmentation in India, the government should take proactive steps through targeted policy measures.
Private companies, focused on the bottom line, may pursue only automation solutions. Public-private partnerships (PPPs) can direct innovation and funding towards augmentation-based solutions. The recently established Centres of Excellence in AI should be leveraged to foster such partnerships.
The government may consider offering financial incentives, such as subsidies and tax exemptions, for building augmentation solutions in priority sectors and projects. For instance, agri-chatbots can assist farmers with information and guidance on crop management, pest control, weather updates, and optimal planting times.
Lastly, any government AI development framework or strategy should incorporate the principle of keeping a human in the loop in the design, development, and implementation of AI solutions.
Stewarding Institutions in India
According to the Survey, these institutions need to be flexible, spanning multiple sectors and staying current with developments to spot both opportunities and risks. They must balance public welfare with fostering innovation while promoting transparency and accountability to ensure social acceptance of AI. Given this role, we believe the India AI Safety Institute (AISI) can serve as a stewarding institution. While the India AISI is still at a preliminary stage, international experience suggests that safety institutes are well equipped to play this role. We discuss this below.
AISIs across the world have focussed on advancing AI safety through technical research. This includes research and testing around evaluations, foundational AI safety research, and efforts to build the field of AI safety as a whole. By staying on top of the latest developments, AISIs are well positioned to identify both risks and opportunities.
Most AISIs have been set up under government departments, which has kept their agendas strongly guided by public welfare. The International Network of AISIs’ mission statement likewise emphasizes “ensuring safe, secure, and trustworthy AI benefits all of humanity”. At the same time, all AISIs engage closely with industry, enabling them to strike a balance between innovation and societal good. Recent developments in the UK and US suggest a tilt towards AI innovation and development; however, based on the PM’s address at the AI Summit in Paris, India remains focused on striking a balance between public good and AI innovation.
Lastly, AISIs establish standards by developing guidelines and protocols around transparency and accountability. Their approach varies from creating flexible procedures, as seen in Japan and the US, to setting more formal standards that inform regulatory processes, similar to the EU's model.
The authors would like to thank Professor Sudarsan Padmanabhan, Department of Humanities and Social Sciences, IIT Madras, for his input. You can contact Omir Kumar at omir@cerai.in.


