Responsible Computing: The Next Frontier of AI Education and Workforce
Overview of a recent talk by the Head of the Centre, Prof. Ravindran, written by Policy Analyst Omir Kumar
Recently, Dr Balaraman Ravindran, Head of the Centre for Responsible AI (“CeRAI”), gave a talk on "Responsible Computing: The Next Frontier of AI Education and Workforce" at the Mozilla Responsible Computing Challenge (RCC) Global Conference. The talk touched upon key questions on AI, such as the reason behind the AI hype, the future of work, tackling the harm of AI systems, and the role of AI safety institutes. This blog seeks to provide a broad overview of the talk.
Democratized access to AI and associated risks
While AI technologies have existed for quite some time now, the hype around AI in the past few years has been driven by the democratization of access. The success of ChatGPT isn't just about its capabilities; it's about how easy it is for anyone to use. For the first time, a very advanced AI tool is in the hands of the public. The flipside is that many people may not understand the risks such systems pose, which can lead to misuse. Today AI is being used in critical areas such as courtrooms, hospitals, and banks, where it can have wide-ranging, often adverse, effects on society. For instance, an AI-based tool used in the US to predict whether someone would re-offend ended up discriminating against black defendants.
Building safe and responsible AI
Given the widespread use of AI systems in various aspects of our daily lives and their potential for harm, it is important to advocate for building safe and responsible AI systems. Some key principles are core to responsible AI: fairness, explainability/interpretability, transparency, robustness, security, privacy, and accountability. There have been numerous instances where AI systems have gone wrong or been misused, from ChatGPT giving out facts that do not exist and AI systems discriminating against black people, to deepfakes being circulated on the internet. These instances establish the need for safe and responsible AI.
Making AI safe is not easy
Building safe AI isn't as easy as it sounds, and there are several considerations involved. Firstly, defining terms like fairness, ethics, and explainability is not easy. Take gender, for instance: it is a key attribute that should be taken into account when designing healthcare systems, but it may not be relevant to an AI system that grants loans based on a person's credit score. There is an urgent need for clear standards to address these issues, but that is difficult because different experts use different terms when discussing AI safety, making it hard to agree on solutions. What these questions tell us is that formulating standards and definitions for responsible AI cannot follow a one-size-fits-all approach; it will have to be a multidisciplinary, sector-specific approach.
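As a rough illustration of why "fairness" resists a single definition, the sketch below (with hypothetical data and column names, not from any real system) computes one common formalization, the demographic parity gap, for a loan-approval model. Other metrics, such as equalized odds, can rank the same model differently on the same data, which is part of why sector-specific standards are needed.

```python
# Hypothetical example: demographic parity gap for a loan-approval model.
# The data and column names are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,    1,   0,   1,   0,   1,   1,   1],  # model decisions
})

# Demographic parity: approval rates should be similar across groups.
rates = df.groupby("gender")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {gap:.2f}")

# A different metric (e.g. equalized odds, which also conditions on the
# true outcome) can disagree with this one on the same model -- one reason
# a single, universal definition of "fairness" is hard to standardize.
```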
The story does not end with making the AI model itself safe. An AI model is like an engine: it is just one part of the car. To make the car safe, you need other safeguards such as brakes and airbags. Similarly, to make AI systems safe you need other measures in place, such as output monitoring and filters, complete and accurate datasets, post-deployment evaluation, feedback mechanisms, and appropriate standards and laws.
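As a minimal, hypothetical sketch of one such guardrail, the function below screens a model's output against a simple blocklist and a length limit before it reaches the user. Real deployments rely on trained safety classifiers, logging, and human review rather than a hand-written blocklist, but the overall structure of a post-generation filter is similar.

```python
# Minimal, hypothetical sketch of an output filter sitting between an AI
# model and the user. The blocked terms and limits are illustrative only.
BLOCKED_TERMS = {"ssn", "credit card number"}
MAX_CHARS = 2000

def filter_output(model_response: str) -> str:
    """Return the model response if it passes basic checks, else a safe fallback."""
    text = model_response.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "Response withheld: it may contain sensitive information."
    if len(model_response) > MAX_CHARS:
        return model_response[:MAX_CHARS] + " [truncated]"
    return model_response

# Example usage with a stand-in model output:
print(filter_output("Here is the loan decision and the applicant's SSN ..."))
```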
Role of AI Safety Institutes
AI Safety Institutes will play an important role in advancing safe and responsible AI. Several countries, including the UK, the US, Japan, and Singapore, have already set up AI Safety Institutes, and consultations on establishing one in India are ongoing. Dr Ravindran suggested that these institutes should serve as knowledge hubs, help guide research, and advise governments and regulators on AI policy.
Bridging the gap between tech experts and policymakers
While policymakers need to frame regulations that prevent harm from AI systems, a key point raised in the talk was the disconnect between tech experts and policymakers. For example, when the Indian government issued an advisory requiring watermarks to identify AI-generated content, many AI experts pointed out that it would not work well, and the advisory was withdrawn within a few days. This shows the need for better communication between those who build AI and those who make the laws.
Future of education and work
We are witnessing the increasing capabilities of AI systems, ranging from coding to generating art and music. This has sparked conversations about the future of education and AI's impact on the nature of work. Dr Ravindran suggested that while AI will play a significant role, humans will still need a basic understanding of their respective fields. In the future, kids may not need to learn the exact syntax of a programming language, but they will still need a strong grasp of core programming concepts. This foundational knowledge will help them think through problems and effectively prompt AI tools to write code for them. Similarly, while AI can create art, it takes an artistic mind to identify what is good and unique art. There is a future where humans and AI work together not only to improve efficiency and quality but also to make discoveries.
Need to have a holistic understanding of AI Safety
We need a broader, more thoughtful approach to AI safety. It's not just about fixing immediate problems; we also need to consider the wider social, cultural, and economic effects of AI, especially in a diverse country like India. For example, while the US might focus on biases related to race, in India we also need to think about caste, religion, and language. We need a multi-disciplinary approach to understanding what the responsible use of AI looks like.
You can contact Prof. Ravindran at ravi@dsai.iitm.ac.in and the author of this article Omir Kumar at omir@cerai.in.