AI-powered Indian judiciary: A step forward or cause for concern?

While predicting that artificial intelligence will play an expansive role in case disposal and in streamlining court work, domain experts advocate the exercise of discretion.
Artificial Intelligence and Indian Judiciary

The Indian legal system and its stakeholders have been historically loath to adopt technology in their work. But with the advent of groundbreaking developments in the field of Artificial Intelligence (AI) and a shift in mindsets, that trend is likely to change in the near future.

Most recently, the Punjab & Haryana High Court used the AI tool ChatGPT while deciding a bail plea. Domain experts feel that this is just the beginning; they opine that AI will have more expansive participation in case disposal and in streamlining the justice dispensation mechanism.

But to what extent can AI be used in our justice delivery system? Can it replace humans in judicial decision-making that impacts real human lives and liberties?

Given the concerns over bias and glitches, the stakeholders of the legal system are advised to be cautious.

AI for transcription of hearings

During a hearing in February related to the political power struggle in Maharashtra, the Supreme Court used AI to transcribe its proceedings that were aired live. A screen displaying the live transcription of the proceedings was placed in the court of Chief Justice of India DY Chandrachud.

A Bengaluru-based company called Technology Enabled Resolutions (TERES), which had earlier provided AI-enabled transcription services to arbitration practitioners, helped the apex court develop this facility.

Lawyer and tech expert Vikas Mahendra was among those behind the AI-based transcription tool. The CJI's pursuit of solutions for streamlining the administration of justice led to Mahendra and his team conceptualising the idea.

Vikas Mahendra

“So that marrying of supply and demand happened quite coincidentally,” he shares.

Mahendra and his team had been transcribing the sessions of the Delhi Arbitration Weekend held in February this year when the CJI highlighted the importance of transcription. 

“We just happened to meet him on the sidelines of the event and tell him that in fact these solutions were already available in India. That immediately caught his attention and he wanted to try it, and that's why on a pilot, the Constitution Bench hearing of the Maharashtra power struggle case was transcribed,” reveals Mahendra.

Mahendra feels that the biggest beneficiary of the transcription tool could be the district judiciary, which sees the highest volume of evidence recording in the country. He is confident that the district judiciary could easily achieve “four to five times” the efficiency with which it is currently functioning.

Coupled with its accuracy, transcription can boost case disposals and bring down pendency across courts in India, he opines.
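
For readers curious what such a pipeline might look like in practice, the snippet below is a minimal, purely illustrative sketch using the open-source Whisper speech-to-text library. It is not the TERES tool used by the Supreme Court, whose implementation is not public, and the file name and model size are assumptions made for illustration.

```python
# Illustrative sketch only: open-source Whisper speech recognition, not the
# actual TERES/Supreme Court transcription pipeline.
# Requires: pip install openai-whisper
import whisper

# "base" is a small general-purpose model; a court deployment would likely need
# a larger, domain-tuned model plus human proofreading for legal terminology.
model = whisper.load_model("base")

# "hearing_audio.wav" is a placeholder name for a recorded proceeding.
result = model.transcribe("hearing_audio.wav")

# Full transcript text
print(result["text"])

# Timestamped segments, useful for aligning a transcript with a live video feed
for segment in result["segments"]:
    print(f"[{segment['start']:7.1f}s - {segment['end']:7.1f}s] {segment['text']}")
```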

AI to tackle pendency

The Supreme Court’s first brush with AI was exactly a year ago when its Artificial Intelligence Committee launched a portal called the Supreme Court Portal for Assistance in Courts Efficiency (SUPACE). Among other things, SUPACE was aimed at providing digital infrastructure to further the purpose of the digitisation of the court process.

Former CJI SA Bobde

Former CJI SA Bobde, then patron-in-chief of the AI committee, highlighted the objections surrounding the use of AI vis-a-vis judicial decision-making and underlined the importance of an independent judicial mind in the form of a judge.

But he referred to SUPACE as a “unique” initiative which was nothing but a collaboration between a human mind and machine intelligence aimed at giving “remarkable” results.

The objections to the use of AI in the Supreme Court system, Bobde said, were “totally unwarranted”, as the system was designed to aid a judge with the facts while writing verdicts, leaving the judge to apply their own intelligence and sense of right and wrong.

Manthan Trivedi

This AI-powered solution was the brainchild of a team of tech experts led by Manthan Trivedi. The idea was to assist stakeholders in reducing pendency in the judiciary through technological innovation.

But the initial apprehension in accepting and understanding a new technology is palpable. He notes that there is a general fear that the introduction of AI in the judiciary will endanger the involvement of human actors in the process.

"Keeping that in mind — a very far fetched idea to be very honest — we designed a system that was to assist the human,” says Trivedi. 

AI-powered SUPACE also offered a machine learning feature that could mimic a user's behaviour, similar to the algorithms on YouTube, which suggest videos resembling those already watched.

According to Trivedi, the technology was similar to summarising a document or information in a particular format that was deemed worthy of use. 

"In the same way, it would identify the patterns in which one thinks that a certain summary or some content is suitable for use and based on the user, the next time it will create a summary that is more appropriate to the case,’’ he explains.

According to Trivedi, a future driven by AI can see cases of lesser complexity, such as traffic challans, resolved with automation.

People file appeals against decisions by which they feel aggrieved or dissatisfied. However, in Trivedi's opinion, we may see a drop in appeals in the future, with artificial intelligence predicting whether a particular verdict is likely to satisfy both parties.

But can technology really solve the pendency issue?

A recent study conducted by research collective Digital Futures Lab argues that there is a notion that technological interventions, by taking the human factors of bias and discretion out of the process, make the system more efficient and guarantee more uniform outcomes. However, this notion is based on a reductive approach to the complex problems faced by the judiciary.

The report notes that pendency is described as a phenomenon of “docket explosion”, where the demand for justice far outstrips the supply. Increasing the number of judges, establishing fast-track courts for speedy justice, or reducing the entry of disputes into the formal court system are often suggested to alleviate the issue.

It further points out that empirical studies on the functioning of the courts indicate that pendency is not an issue of supply and demand, but one of delays. These can be attributed to the culture within courts, the vested interests that powerful actors have in dragging out cases, delays caused by other law and order agencies, and broader socio-economic factors that shape the capacity of litigants to attend or delay proceedings.

Moreover, it notes that pendency as a statistic is unreliable in some cases: a matter of constitutional importance pending before the Supreme Court will impact related matters before subordinate courts, since the practice is to keep those hearings in abeyance till the top court decides the matter.

Biases, glitches and other red flags

Many stakeholders have pointed out that information garnered by AI technology can expose judicial decisions to biases and glitches.  

“Biased data sets can lead to skewed outcomes or prejudice against improperly represented groups in these data sets,” the research by Digital Futures Lab notes.

Moreover, legal determinations by automated systems have led to the denial of rights in the recent past. 

A 2020 news report from the US that the research paper refers to showed that automated background checks for renters wrongly labelled people as criminals and sex offenders, denying them the right to housing. 

In Poland, the deployment of an algorithm by the Ministry of Justice for case allocation led to concerns among judges that some of them had been allocated disproportionately more cases than others. In July 2021, the Polish Supreme Court ruled that the details of the algorithm had to be made publicly available.

Nikhil Narendran

Technology lawyer and Partner at Trilegal Nikhil Narendran was a part of an international project on responsible AI consisting of 53 lawyers and researchers from 16 countries in 2019. He says that generative AI tools (GAI) such as ChatGPT can be used, as they can significantly improve the productivity of the bar and the bench. However, he advocates the use of discretion.

According to Narendran, several proven instances of “hallucination” by such tools have led to the creation of misinformation. He insists on using GAI tools only as a secondary source for research, rather than a primary source.

“Biases and glitches will continue just like the case of any other software that we see now. That’s where the role of the human lawyer comes in to apply the objective test and ensure that the text produced is accurate. We need to also keep in mind that biases exist in individual lawyers and judges and often it is their individual bias,” he pointed out.

Trivedi, on the other hand, argued that AI inherently does not have a bias. Rather, it was up to the creator to alter and introduce the element of bias in the AI.

“Humans are intrinsically biased, however much we might like to think that we try to stay unbiased. AI is essentially just an augmentation of human capability,” he says.

In his opinion, the solution lies in eliminating biases in the creators of the AI. To this end, he proposes eliminating biases through the kind of data the AI is exposed to and by re-envisioning the training of models.

"When you are training a model, you actually have supervised learning where you have real professionals who are sort of checking what generative AI is doing."

Some of the recent examples of odd behaviours displayed by large language models and generative AI were logged by Rahul Matthan, a lawyer who has been writing on the intersection of law, technology and society.

“Containment of AI is still possible, given that only a handful of companies have the resources to compete right now. Once this changes, as it soon could, it will be too late,” he wrote in a recent article.

On the point of bias, the Digital Futures Lab report warns of a situation where algorithmic decision-making systems are based on previous human decisions. In such a scenario, it is likely that the same biases that potentially undermine human decision-making are replicated and multiplied in these systems, with their identification and resolution becoming even more difficult.

Arpit Bhargava

For Arpit Bhargava, a lawyer who practises in the Delhi High Court and the criminal courts of the national capital, no AI can ever assist in what he believes is the “best drafting” of pleas, which differ from case to case.

AI, for him, is both a boon — in that it helps use time and effort efficiently and brings organisation to a largely unorganised sector — and a matter of concern.

“Judges will have to pen down judgments on the basis of pleadings, evidence and submissions advanced by the lawyers or parties, which cannot be done by AI as facts and circumstances of each case differ,” says Bhargava.

In order to uphold the popular adage of “justice not only being done but also seen to be done”, the involvement of the human element is essential, he argues.

AI v. humans

The Digital Futures Lab report quite crucially notes the assurances of senior court and government officials that “disruptive technologies” that replace human decision-making will not be used. It argues that even applications that support case summarisation or information extraction can shift how judges make decisions, particularly if automation bias sets in.

Another possible outcome of a future driven by AI can be a dearth of employment opportunities in the Indian judiciary. 

Mahendra indicated a possible increase in adoption of AI-powered transcription tools for dictation of judicial orders or daily recording of evidence owing to the gradual decline of the shorthand method.

Though he believes in AI’s huge potential for the Indian judicial system, he advises “extreme caution” when using it. The use of AI in alternative dispute resolution can pay much-needed dividends, he adds.

“If most of the court staff's work or clerks' work is handled by AI, especially in the Indian context, where there is no dearth of workforce unlike abroad, it will lead to a massive hue and cry. Therefore, the larger question is: are we ready to include AI in the justice delivery system?” asks Bhargava.

Trivedi, on the other hand, believes there can never be a time when AI replaces human intelligence. 

“Scientific AI has been there since 1956 or then we had AI in Facebook and Google where it would recommend similar things that you like. But now is the advent of generative AI,” he says.

It won’t be out of place to mention the recent remarks of Justice Hima Kohli of the Supreme Court. Although she flagged issues of "fairness, bias and protection of civil liberties” in the implementation of AI, she also underscored that it was “here to stay”.
