In April 2023, the Minister of Electronics and Information Technology, Ashwini Vaishnaw, reiterated that the Indian Government would not be enacting legislation to regulate the growth of Artificial Intelligence (“AI”) in the country. This stands in sharp contrast with the rest of the globe, where 31 countries have passed at least one AI-related bill since 2016. Further, the European Parliament has recently approved the EU Artificial Intelligence Act (“AI Act”), while the USA and Singapore have released a ‘Blueprint for an AI Bill of Rights’ and a ‘Model AI Governance Framework’ respectively.
The push for AI regulation is often credited to the unpredictable and uncontrollable nature of AI systems, which may jeopardize fundamental rights such as the right to non-discrimination, freedom of expression, human dignity, personal data protection, and privacy. Italy temporarily banned ChatGPT (an AI-based chatbot) over privacy violations, while countries such as China, Russia, and Iran have blocked access to it. As such, there is a need for a human-centric approach to regulating the growth of AI.
With this background, Part I discusses the existing regulatory framework that can indirectly regulate AI in India, Part II analyzes the most comprehensive regulatory framework for AI to date, i.e., the EU AI Act, and Part III discusses the way forward.
NITI Aayog, in its 2018 Report, noted that while ethical issues stem from the biases programmed into an AI system, privacy concerns lie in the collection and inappropriate use of personal data in ways that may lead to discrimination. There is, therefore, a need to attribute liability with respect to an AI system, with a view to ensuring safe AI. The agency recommended that the Government act as a facilitator, promoting the research, application, reskilling, training, and adoption of AI, and ensure responsible AI development by setting up dedicated agencies and centres. The nodal agency's 2021 report also highlighted the need for soft governance measures along with specific legal provisions for AI-based decision-making processes. Similarly, the Ministry of Commerce and Industry recommended that a nodal agency, along with data banks and other policies, be established to coordinate AI-related activities in India, and the Telecom Regulatory Authority of India has noted the urgency of adopting a regulatory framework for AI. However, even after such recommendations, the Government has not been proactive in the field, and its approach has been one of ‘wait and see’.
Currently, no codified laws, statutory rules, or regulations directly govern the use of AI; instead, it is indirectly governed through statutes on intellectual property, cyber-security, data privacy, and the like. For instance, under the pending Digital Personal Data Protection Bill, 2022, AI developers might qualify as ‘data fiduciaries’, since they would potentially collect and process data to train their algorithms, and would therefore be required to comply with its privacy obligations. Further, sections 43A and 72A of the Information Technology Act, 2000 provide compensation for failure to protect data and punishment for disclosure of information in breach of a lawful contract, respectively [S. 43A & 72A, Information Technology Act, 2000].
The pending Digital India Bill also seeks to regulate AI through the prism of user harm in critical fields such as healthcare, banking, and aviation. Further, several sectoral laws, such as the Telemedicine Practice Guidelines 2020, ban AI-based counselling and prescription, and the SEBI (Investment Advisers) Regulations, 2013 apply to investment advisers that use automated tools such as AI. Therefore, before a new regulation is enacted, there is a need to assess whether the sectoral regulations already address the changes brought about by advances in AI and, where they do not, whether the gap can be filled through amendments.
The EU AI Act aims to strengthen Europe's position as a global hub of excellence in AI by harnessing its industrial use while ensuring that AI in Europe respects European values and rules. The legislation is built on a ‘classification system’ that determines the level of risk an AI technology could pose to the health, safety, or fundamental rights of a person, comprising four tiers: unacceptable, high, limited, and minimal. Based on the risk an AI system poses, the Act provides different degrees of checks and balances. For instance, while the Act bans AI posing an unacceptable risk, such as systems that deploy subliminal techniques or exploit the vulnerabilities of a specific group of persons [Article 5, The Artificial Intelligence Act], and requires ex-ante conformity assessments and other obligations for high-risk AI systems [Article 6, The Artificial Intelligence Act], it imposes only transparency obligations on AI posing a limited or minimal risk [Article 52, The Artificial Intelligence Act].
The legislation mandates informing individuals when they are interacting with an AI system, including when content is AI-generated or manipulated. Further, the Act strictly requires disclosure of the data used by a system, in order to respect the privacy of individuals. The European Commission is to gauge the risk level based on factors such as the purpose and usage of the system, evidence of harm, and the reversibility of the outcomes it generates. The Act also identifies AI systems used for purposes such as biometric identification and critical infrastructure management as high-risk systems under Annex III. Further, to ensure accountability for high-risk AI systems, the legislation requires that such systems be overseen by a natural person [Article 14, The Artificial Intelligence Act].
While the human-centric approach of the Act is intended to increase trust in AI, it has several drawbacks. Firstly, the definition of AI is excessively broad: it covers any software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with. Secondly, the risk-based approach lacks clarity, making it difficult to implement given the complexities involved and a definition that sweeps in swathes of technology irrelevant to the Act's aims. The compliance burden would also be daunting for MSMEs, which might face excessive bureaucratic hurdles, thereby stifling competition. Moreover, the Act places responsibility only on the initial ‘provider’ of the system and imposes no duties on users who generate AI content online, failing to account for the fact that AI products are dynamic and their behaviour depends on their users.
The European regulation reflects growing concerns about the boom in AI-related technology, which has already outpaced existing laws. Several companies around the globe have stepped into the corporate AI race, and it is only a matter of time before AI becomes part of our daily lives. Although regulating AI-related technology will be difficult given the velocity of AI development, the Government should not give companies the leeway to govern AI as they deem fit. India, being a labour-intensive country, must also be concerned about the potential loss of jobs in low-skilled sectors. With most of India's important AI-related regulations currently in the works, this is an opportune time to study how AI is being regulated around the world; while various nodal agencies have emphasized the need to regulate AI in India, before buckling down to do so, India could benefit from the experience of its European counterparts.
The growth of AI has the potential to add approximately $500 billion to India's GDP by 2025. Any regulation should therefore avoid needless complexity and clearly state the duties and responsibilities of both providers and users. India must harness the power of AI through a comprehensive regulatory framework that addresses concerns such as data privacy, algorithmic bias, and accountability. Unlike the European regulation, it should also regulate the downstream use of AI so that users bear separate accountability. Further, privacy legislation should regulate the use of publicly available data for AI training purposes. At the same time, such laws should not be cumbersome or excessively bureaucratic.
Regulating AI, a simulation of evolving human intelligence by machines, is challenging and requires innovative approaches: not only the scope and subject matter but also the manner of regulation demands attention. The Indian Government's stance has not been proactive; it has instead treated AI as a ‘kinetic enabler’ whose potential can be harnessed for better governance. However, India requires a policy that mandates continuous, iterative risk assessments of AI operations, requires that only error-free data sets be used for training, and imposes a responsibility on such systems to produce audit trails for transparency. A flexible, interconnected approach to AI regulation is therefore essential for its responsible use.
Debanjan Mandal is the Managing Partner of Fox & Mandal. Mahima Cholera is a Senior Associate and Dipak Verma is an Associate at the Firm.