While the European Union (EU) has published the final version of its Artificial Intelligence (AI) Act and the United States (US) is pursuing a sector-specific governance model to regulate the development and use of AI, India’s Economic Advisory Council to the Prime Minister has proposed a Complex Adaptive System (CAS) framework to govern AI (the “CAS Framework Proposal”). This article outlines the principles of AI governance enunciated in the CAS Framework Proposal and briefly touches upon a few draft legislations, recommendations, and policies in the field of AI.
The CAS Framework Proposal sets out five core principles for regulating the use of AI:
(i) Instituting Guardrails and Partitions to ‘Prevent Wildfire’ – This principle emphasizes (i) the need to limit the operation of AI systems within specific predefined technical boundaries so as to control their unpredictable behavior; and (ii) the creation of separate partitions for different AI processes through strict separation protocols and techniques.
(ii) Ensuring Human Control through Manual ‘Overrides’ and ‘Authorization Chokepoints’ – This principle focuses on ensuring human oversight of AI behavior. Humans are required to intervene, control, and remediate any unpredictable or non-standard behavior of AI systems. Further, where high-risk decisions are to be taken by AI systems, this principle calls for human validation of those decisions through a hierarchical governance process, which in turn requires imparting specialized AI training to the human validators.
(iii) Transparency and Accountability – This principle (i) promotes the use of open-source licenses for AI algorithms so that external auditors can evaluate them for bias and risks; (ii) emphasizes documentation, in a uniform format for consistency of interpretation, detailing how the AI system was developed (i.e., via coding or learning), along with logs, data sources, training procedures, performance metrics, and known limitations; and (iii) mandates dynamic monitoring through regular audits of AI systems, the disclosure of extreme outcomes, and the use of debugging and monitoring tools to track AI systems’ decisions in real time.
(iv) Distinct Accountability – This principle requires predefined liability protocols for cases of AI system malfunction or non-standard behavior. In other words, to ensure accountability, any malfunction or non-standard behavior should be attributable to a particular individual or department within an entity, and traceability mechanisms within AI systems would help ensure the safety of all components and actors involved in their functioning. This also mitigates any negative impact that non-standard AI behavior may cause. Further, this principle requires the establishment of mechanisms for reporting and investigating AI system failures or non-standard behaviors.
(v) Specialized, Agile Regulatory Body – This final principle focuses on (i) establishing a separate, independent expert regulatory body mandated to respond swiftly to emerging AI challenges without red tape; (ii) equipping that body with tools and methods for scrutinizing the AI domain for compliance gaps and any other matters warranting regulatory attention and intervention; and (iii) encouraging the body to coordinate with academia and industry bodies and to take their feedback into account when issuing directives. Further, this principle specifies the use of real-time monitoring tools to measure AI systems’ behavior against set standards; automated systems to flag potential non-standard behavior; a centralized database of AI algorithms to support regulatory compliance and promote innovation; and a national registry of non-standard behaviors, which would give the regulator the feedback needed to course-correct AI systems.
There have also been recommendations from the AI Taskforce (Report of Taskforce on Artificial Intelligence), NITI Aayog (National Strategy on Artificial Intelligence, Two-Part Discussion Papers on Responsible AI), and the Ministry of Electronics and Information Technology (MeitY) (Reports of the AI Committees). These recommendations focus on seven core principles: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and protection and promotion of positive human values. The CAS Framework Proposal echoes the essence of these seven principles.
Additionally, the Indian government is expected to introduce the following:
(i) Draft Digital India Act – This is intended to replace the existing information technology laws and to operate as holistic legislation covering multiple aspects, including data protection and AI, to the extent necessary to safeguard users who may be affected by the use of AI. It also advocates the creation of a separate agency responsible for overseeing the digital domain.
(ii) Recommendations from the Telecom Regulatory Authority of India (TRAI) – TRAI has also stepped into the preparation of a framework for AI governance, advocating a uniform, sector-agnostic regulatory framework with a risk-based approach for governing AI systems, along with an independent statutory authority responsible for developing AI governance guidelines and ethical codes. This closely aligns with the EU’s approach to AI governance.
(iii) Draft National Data Governance Framework Policy – Besides intending to transform data governance and processing by the government, this policy aims to promote AI- and data-led research and the start-up ecosystem by creating a large repository of anonymized datasets. This would be achieved through the India Datasets platform, which will allow Indian researchers and start-ups to access the anonymized datasets.
(iv) Guidance on AI Risk Management – The AI committee of the Bureau of Indian Standards (BIS) focuses on developing Indian standards equivalent to the ISO standards in the field of AI. The relevant standards are: Information technology – Artificial intelligence – Process management framework for big data analytics; Information technology – Artificial intelligence (AI) – Overview of computational approaches for AI systems; Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organizations; and Information technology – Artificial intelligence – Overview of ethical and societal concerns. These standards are accessible at cost on the BIS website.
While the principles enunciated in the CAS Framework Proposal may help the Indian government develop an appropriate framework for AI governance, it remains to be seen whether the government will govern AI through the Digital India Act or through new legislation dedicated exclusively to the development, deployment, and operation of AI. It is expected that the Indian government, considering the principles under the CAS Framework Proposal, other draft policy frameworks, and recommendations from various AI experts and stakeholders, will adopt a holistic, sector-agnostic AI governance approach that remains fluid and flexible in the face of changing AI risks and capabilities. It is also reasonable to expect any proposed AI governance approach to cover the use of AI not just by private players but also by statutory agencies.
About the author: Sandeep G is an Associate at NovoJuris Legal.