[Vidhispeaks] Charting a separate course - Why India must establish its own thought leadership on AI governance


By Ameen Jauhar

In the 1970s, India was treated as a pariah by the international community when it came to the sharing of nuclear technology, given its then-recent nuclear test. In a world order clearly forged in post-war bipolarity, it was difficult for India, or any other newly independent state, to realistically stake a claim to a meaningful role in global politics. Consequently, India found itself on the receiving end of many international conventions and treaties that repeatedly impeded its domestic priorities. Yet, to integrate a fledgling economy into this new economic order, the country had to compromise certain local interests to ensure conformity with international values, beliefs, and ideals.

Fast forward to more contemporary encounters: India is firmly establishing itself as an Asian democracy with over a seventh of the world's population and rapid economic growth - a country in the Global South that the North cannot afford to sideline. This significantly improved position is also reflected in India's increasingly vocal and somewhat defiant diplomatic stances, often at odds with, and arguably to the chagrin of, some Western nations. Be it the ongoing negotiations around a TRIPS waiver, or resolutely walking a diplomatic tightrope on the Russia-Ukraine conflict, India is establishing itself as an alternative voice on global policy and political challenges.

Current debates around the regulation of artificial intelligence (AI) are reminiscent of the international discourse witnessed during the forging of the nuclear non-proliferation movement, and subsequently around issues of climate change.

AI has increasingly piqued the interest of lawyers, regulators, policymakers, and civil society, given its perceived high-risk, high-reward nature. Over the past decade, AI, propelled by advances in data processing techniques, has expanded exponentially from a few recommender algorithms underlying YouTube or social media feeds to deployment across sectors like health, education, agriculture, urban planning and mobility, and even the justice system.

Its ubiquity has bolstered ambition, yet it has simultaneously been a cause of concern. Researchers and AI ethicists increasingly flag serious risks of bias, discrimination, opacity, and the exacerbation of deep-rooted societal prejudices - risks that have triggered legitimate concerns around the unbridled adoption of AI and intelligent algorithms.

Even preceding conversations around AI governance, there has been a growing chorus of concern about its predominant developers, infamously referenced as "Big Tech". The prospect of technology behemoths like Meta, Google, Amazon, Microsoft and a few others dominating the development of sophisticated, cutting-edge AI systems is a real worry for many governments.

This apprehension stems from a larger distrust of Big Tech, spurred by scandalous instances of data breaches, antitrust violations, and abuse of dominance that have afflicted this sector, in India and internationally. There are also concerns around the immense power and influence these corporations wield against even sovereign organs of the state, like the bureaucracy and judiciary. Consequently, Big Tech currently finds itself in regulatory crosshairs across the globe.

AI occupies a particular position of concern in this discourse around reining in tech giants and making the overall sector more accountable to states and citizenry. A global trend of developing ethical charters, principles for aligning AI deployment with human rights, and implementing soft-touch measures has become quite visible. Over 160 such frameworks are presently in play, yet the obvious gap, to many engaged in this discourse, is the lack of real, meaningful regulation and enforcement of these ethical credos. This lacuna has served as impetus for a growing call for concrete regulation of AI that would mitigate its risks and maximise its benefits.

As with its data protection regulation, the General Data Protection Regulation (GDPR), the EU is making painstaking efforts to assume thought leadership around the ethical and safe deployment of AI systems. This process has culminated in the proposal for an AI Act, which was presented in the European Parliament last year and is likely to be passed into binding legislation before the end of this year.

Briefly put, the proposed AI Act seeks to create a risk-based regulatory framework that would place a proportionately higher compliance burden on high-risk AI systems. Interestingly, this law also takes a conclusive position favouring an omnibus statute in place of sectoral or use-specific regulation, akin to how the GDPR works for data protection.

Contrary to the EU's stance, the United States appears to be targeting specific forms of AI usage - foremost, algorithms used in recruitment processes, the criminal justice system, and predictive policing. The objective there is also the preservation of core constitutional values, but through a more focused regulatory framework. Similarly, China's AI regulation largely targets recommender algorithms, which have been viewed as challenges to the State's dominance and its sovereign functions.

With these and a handful of other nations, there is now officially not merely an AI race, but a race for thought leadership around AI governance and regulation. As the largest AI market (greater than that of China and the US), India has an unprecedented global platform to bring its own ideas and principles to this international brainstorming.

For Indian regulators, it will be crucial to truly understand what issues like AI bias, fairness, the digital divide, and other risks mean for our own populace, its aspirations, and social realities. Furthermore, while AI is being used by private corporations and entities, the dominant user of AI and algorithmic tools is likely to be the government and its agencies, as is evident from the National AI Strategy published by NITI Aayog in 2018. Any regulatory framework that is developed must consider the state's use of AI and the risks it poses, and must lay down the requisite checks and balances, state liability standards, and exceptions to the rules. Future legislation cannot create sweeping exemptions for state usage in avenues like predictive policing and surveillance while creating a stringent compliance regime for the private sector - one of the biggest criticisms of the proposed data protection bill.

Having our own perspectives on notions of fairness, transparency, accountability, and trustworthiness of AI systems will allow India to be not merely a participant in global discussions, but a meaningful contributor to the process. Simultaneously, it will provide a much-needed alternative to continental European and American constructs of tech regulation - constructs that will arguably be better suited to guide the adaptation and development of similar regulatory frameworks in other Asian and, more broadly, Global South countries.

Ameen Jauhar is Team Lead, Centre for Applied Law & Tech Research at Vidhi Centre for Legal Policy.

Vidhispeaks is a column on law and policy curated by Vidhi. The views expressed are of the author, and do not reflect the views of Vidhi or Bar & Bench.
