This is a first-of-its-kind multi-stakeholder, interdisciplinary centre to develop responsible AI technologies and frameworks with a ground-up focus on India
Indian Institute of Technology Madras (IIT Madras) has established a Centre for Responsible AI (CeRAI), an interdisciplinary research centre, to ensure ethical and responsible development of AI-based solutions in the real world.
It is geared towards becoming a premier national and international research centre for both fundamental and applied research in Responsible AI, with immediate impact on the deployment of AI systems in the Indian ecosystem.
Google is the first platinum consortium member and has contributed US$1 million to the Centre.
The Centre for Responsible AI conducted its first workshop on ‘Responsible AI for India’ today (15th May 2023). It was formally inaugurated on 27th April 2023 by Shri Rajeev Chandrasekhar, Hon’ble Minister of State for Electronics and Information Technology and Skill Development and Entrepreneurship, Government of India.
Addressing the inaugural session of this workshop, Abhishek Singh, Managing Director and Chief Executive Officer, Digital India Corporation, said, “I am sure that the deliberations that will happen today (15th May 2023) in this workshop and the various panel discussions will go a long way in helping us evolve our framework, our guidelines and our policies for responsible AI.”
Abhishek Singh added, “AI is playing a major role in all our lives. Whether we know or not, every day we are using AI-based technologies in some part of our life. It is very important that those at the policymaking level and those who are working at the cutting-edge of developing technologies are aware of the risks and challenges that remain while we are using the same technologies for solving societal problems, ensuring access to healthcare, making healthcare more affordable and making education more inclusive and making agriculture more productive… There is a need for non-biased and non-discriminatory AI framework as we have unique requirements that require customization as per our requirements.”
One of the primary objectives of CeRAI will be to produce high-quality research outputs, such as research articles in high-impact journals and conferences, white papers, and patents. It will also work towards creating technical resources relevant to Responsible AI, such as curated datasets (both universal and India-specific), software, and toolkits.
Commenting earlier on the Centre for Responsible AI (CeRAI) coming up at IIT Madras, Sanjay Gupta, Google’s Country Head and Vice President, India, said, “As India’s digital ecosystems increasingly adopt and leverage AI, we are committed to sharing the best practices we have been developing since 2018 when we began championing responsible AI. To help build a foundation of fairness, interpretability, privacy, and security, we are supporting the establishment of a first-of-its-kind multidisciplinary Center for Responsible AI with a grant of $1 million to the Indian Institute of Technology, Madras.”
A panel discussion on ‘Responsible AI for India’ was also held during the workshop. The Centre aims to foster partnerships and collaborations with government organizations, academic institutions and industry. CeRAI is actively engaged with NASSCOM’s Responsible AI initiative to build course material, skilling programmes and toolkits for Responsible AI; with Vidhi Legal to develop a Participative AI framework; with CMC Vellore to explore areas of mutual interest in Responsible AI; with SICCI to help its members better understand the implications of Responsible AI; and with TIE to mentor startups in this space, besides RIS, a think tank of the Ministry of External Affairs, Government of India.
Highlighting the need for such centres, Prof. V. Kamakoti, Director, IIT Madras, said, “We have now reached a stage where we have to assign responsibility to AI tools and interpret the reasons for the output the AI gives. Aspects of human augmentation, biased data sets, risk of leakage of collected data and the introduction of new policies besides substantial research must be addressed. There is a growing need for trust to be built around AI and it is crucial to bring about the notion of privacy. AI will not take away jobs as long as domain interpretation exists.”
Speaking about the work that would be taken up in this centre, Prof. Balaraman Ravindran, Head, Centre for Responsible AI (CeRAI), IIT Madras, said, “It is important for an AI model and its predictions to be explainable and interpretable when they are deployed in critical sectors and domains such as Healthcare, Manufacturing, and Banking/Finance, among other areas.”
Prof. Balaraman Ravindran added, “AI models need to provide performance guarantees appropriate to the applications they are deployed in. This covers data integrity, privacy, robustness of decision making, etc. We need research into developing assurance and risk models for AI systems in different sectors.”
Building on these research outputs, the Centre will also help to:
- formulate sector-specific recommendations and guidelines for policymakers
- provide all stakeholders with the toolkits needed for ethical and responsible management and monitoring of AI systems being developed and deployed
The Centre also plans to conduct specialized sensitization and training programmes to help all stakeholders better appreciate the issues of Ethical and Responsible AI and contribute meaningfully to solving problems in their respective domains. It will also hold a series of technical events, in the form of workshops and conferences, on specialized themes of deployable AI systems, with a strong focus on the ethics and responsibility principles to be followed.