The TV mini-series Class of '09 tells a thrilling story about the impact of Artificial Intelligence (AI) on the criminal justice system in the USA. In the final episode, the AI wrongly accuses a civil rights advocate of a crime because of the anti-AI opinions in her unpublished book, which the AI deemed to be a threat. During the trial, the advocate argues that human judges have little authority to decide her guilt or innocence because AI calculations are treated as sacrosanct by the judiciary. Although courts today do not rely on AI for decision-making, such a scenario could become a possibility in the coming years.
Today, AI is unsettling nearly every profession and industry. However, the use of AI in the legal sector in India is, at the moment, limited to automated contract review, legal research, transcription services and the like.
This article explores the advantages of integrating AI into the legal field, while also examining its impact, the regulations governing its application and the obstacles that may surface.
Potential benefits of AI in the legal profession
Law Firms & Lawyers
The development of AI technology provides an opportunity for lawyers to improve their efficiency, reduce costs and focus on more strategic work. AI can handle mechanical and routine tasks like document and contract review, legal research and data analysis. This can ultimately lead to increased productivity and profitability for law firms. However, AI is not yet capable of handling more complex tasks such as deal structuring, negotiation, advocacy and representation in court. As a result, the use of AI may decrease billable hours for law firms. While larger firms may have the means to implement AI systems, smaller firms may struggle to keep up with the cost of technology and remain cost-effective.
Indian Judiciary
Since 2021, the Supreme Court of India has been using an AI-based tool designed to process case information and make it available to judges for their decisions; the tool itself does not participate in the decision-making process. Another tool used by the Supreme Court is SUVAS (Supreme Court Vidhik Anuvaad Software), which translates legal documents from English into vernacular languages and vice versa.
In the case of Jaswinder Singh v. State of Punjab, the Punjab & Haryana High Court rejected a bail petition in view of the prosecution's allegation that the petitioner was involved in a brutal fatal assault. The presiding judge sought input from ChatGPT to gain a wider perspective on the grant of bail where cruelty is involved. The court clarified, however, that the reference to ChatGPT was not an expression of opinion on the merits of the case and that the trial court would not take these comments into account; the reference was intended solely to provide a broader understanding of bail jurisprudence where cruelty is a factor.
Usage of AI in the judiciary: A comparative analysis
USA
AI-powered tools such as COMPAS (Correctional Offender Management Profiling for Alternative Solutions) assist judges in risk assessment by analyzing factors such as criminal history, social and economic background and mental health to predict the likelihood of recidivism. The US Sentencing Commission also utilizes AI to create and enforce sentencing guidelines for fair and just punishment.
The US court system utilizes chatbots to offer answers to frequently asked questions about court procedures, schedules and other related subjects to the general public. This helps lessen the workload of court staff and enhances accessibility of information for everyone.
China
China's Smart Court system aids judges with AI technology that can analyze past cases and suggest applicable laws and precedents. It can also recommend sentences based on similar cases, allowing judges to make informed decisions and deliver justice quickly.
Chinese courts use AI for legal research. The 'China Judgements Online' platform, powered by AI, allows judges to quickly find relevant legal documents.
United Kingdom
The UK Ministry of Justice introduced the Digital Case System in 2020 for the Crown Courts. It offers real-time case updates and remote court participation, and allows for the digital submission of evidence to reduce paper usage. The Bar Council's Ethics Committee provides guidelines for criminal law barristers accessing the online portal.
Legal framework to regulate AI: Global and Indian perspectives
AI has many potential benefits for society, such as improving healthcare, education, transportation and entertainment. However, AI poses challenges and risks, such as ethical dilemmas, privacy violations, bias, discrimination and security threats.
To address these challenges and risks, a global group of AI experts and data scientists has released a new voluntary framework for developing artificial intelligence products safely. The World Ethical Data Foundation (WEDF) has 25,000 members, including staff at tech giants such as Meta, Google and Samsung. The framework contains 84 questions for developers to consider at the start of an AI project.
However, with a surge in usage of AI, there is a growing need to have exclusive legislation for the regulation of AI, for eliminating in-built or acquired bias and to address ethical concerns while using it.
There are white papers, guidelines and policies in jurisdictions such as the UK, the USA and the EU that target algorithmic impact assessments and the elimination of algorithmic bias. The European Parliament recently adopted amendments to its proposed Artificial Intelligence Act. The amendments propose a ban on the use of AI for biometric surveillance, with exceptions for law enforcement subject to judicial authorization, and require generative AI systems such as ChatGPT to disclose that content is AI-generated.
Indian Perspective:
Currently, there are no specific laws in India regulating AI. The Ministry of Electronics and Information Technology (MeitY) is the executive agency for AI-related strategies and has constituted committees to bring in a policy framework for AI.
NITI Aayog has developed a set of seven responsible AI principles: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. The Supreme Court and high courts have a constitutional mandate to enforce fundamental rights, including the right to privacy. In India, the primary legislation for data protection is the Information Technology Act and its associated rules. Additionally, the Digital Personal Data Protection Bill has been introduced by MeitY, although it is still awaiting formal enactment. If this bill becomes law, individuals will be able to inquire about the data collected from them by both private and government entities, as well as the methods used to process and store it.
Risks and challenges in the context of the legal sector
Confidentiality and data privacy
AI systems generally rely on large amounts of data to learn and make predictions. Such data may include sensitive information, such as personal or financial data. AI algorithms that require this type of data to train effectively may make it difficult for organizations to comply with data protection laws.
Bias in AI systems
Bias introduced while training AI systems can be reflected in their outcomes. AI results can simply mirror existing social and historical imbalances stemming from race, caste, gender and ideology, producing outcomes that do not reflect true merit.
Licensing and questions regarding accountability
AI systems, unlike trained attorneys, do not have to acquire a license to practice law and are therefore not subject to ethical standards and professional codes of conduct. If an AI system provides inaccurate or misleading legal advice, who will be responsible and accountable for it: the developer or the user?
The use of AI in the judiciary also poses a problem even if judges retain ultimate decision-making authority: it is not uncommon to become overly reliant on technology-based recommendations due to automation bias.
As per a recent news report, a New York lawyer used ChatGPT for legal research and included six case citations in a brief filed with the court. Opposing counsel could not find any of the cases, and the lawyer had to admit that he had not independently verified their authenticity. The judge imposed sanctions on the lawyers concerned and their law firm, fining them $5,000 in total. Lawyers should therefore be cautious when using generative AI for legal research.
Concerns regarding competition
It is possible for AI to operate independently of its coders or programmers through its self-learning capabilities. However, this could result in technological and economic disparities that have yet to be fully examined. Such disparities could lead to the misuse of data and potentially disrupt the framework established by the Competition Act, 2002.
Establishing accountability for technology-related errors in the legal field can be a challenging task. Errors made by AI systems can have huge ramifications for the life and liberty of individuals. However, legislators and industry experts from the legal and other fields can take proactive measures to set clear lines of responsibility and to ensure accountability when using AI in their practice.
It is important to remember that AI is not a replacement for lawyers' work; rather, it should complement it. While AI can simplify tedious and time-consuming tasks, it cannot handle strategic decision-making, complex legal analysis or legal counsel.
In the end, lawyers are responsible for their work and must ensure that their client's interests are protected. While AI can assist law firms in improving efficiency, it cannot substitute a lawyer's expertise and experience.
Aditi Prabhu is an Associate Partner at Mumbai-based law firm, Desai Desai Carrimjee and Mulla.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.