
Unlocking the potential of large language models in Artificial Intelligence: Challenges and need for regulation

A multi-faceted approach is essential to regulate LLMs, one that harmonizes innovation with ethical considerations while upholding individual rights.

Sanhita Chauriha

Earlier this week, the Delhi High Court issued an interim “John Doe” order to prevent social media platforms, e-commerce websites and individuals from using actor Anil Kapoor’s name, voice, image or dialogue for commercial purposes without authorization. The Court also prohibited the use of Artificial Intelligence (AI) tools to manipulate his image and the creation of GIFs for monetary gain. It further directed the Union Ministry of Electronics and Information Technology to block pornographic content featuring altered images of the actor.

Throughout the world, we have witnessed instances of these technologies being applied for both positive and negative purposes. Recently, a British judge employed a chatbot to gain a deeper understanding of a case, and earlier this year, a High Court judge in India utilized ChatGPT in a bail case.

There have been reported cases of scammers employing AI voice and video manipulation techniques on phone calls, posing as relatives facing apparent emergencies to extract money from unsuspecting individuals.

As AI becomes increasingly integrated into various aspects of our lives, including content creation, it is imperative to strike a balance between innovation and the safeguarding of individual rights. The cases cited above serve as a reminder of the need for comprehensive regulations to ensure responsible AI use, protect privacy and prevent the misuse of advanced technologies. They also set the stage for important discussions about how AI can be harnessed ethically and responsibly in the future.

Understanding the technology

Imagine you have a super-smart computer program that can read and understand just about any text in the world. It is like a genius language expert that can answer your questions, write essays and even create stories that sound like they were written by famous authors. These super-smart computer programs are called Large Language Models (LLMs). LLMs like GPT-3.5 are trained on vast repositories of text data and engineered to generate human-like content. They offer multi-faceted capabilities across diverse natural language processing tasks, from text completion and translation to summarization and even creative content generation.
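
For readers curious about what this looks like in practice, the short Python sketch below shows how a developer might query such a model. It assumes the open-source Hugging Face transformers library and the small, publicly available gpt2 model as illustrative stand-ins for larger proprietary systems like GPT-3.5; it is a minimal sketch, not a description of any particular product.

    # A minimal sketch: generating text with a small open LLM.
    # Assumes the Hugging Face "transformers" library is installed
    # (pip install transformers torch) and uses "gpt2" as a stand-in
    # for larger models such as GPT-3.5.
    from transformers import pipeline

    # Load a text-generation pipeline; the model's weights encode
    # statistical patterns learned from its training corpus.
    generator = pipeline("text-generation", model="gpt2")

    # The model extends the prompt by repeatedly predicting the next
    # most likely token, which is how LLMs produce fluent, human-like text.
    prompt = "The court held that"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])

This same next-token prediction mechanism broadly underlies every capability discussed in this piece, from summarization to translation; the differences lie mainly in scale, training data and fine-tuning.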

LLMs: What is causing all the buzz?

LLMs offer remarkable performance by generating human-like text fluently and coherently across various language tasks. They can generalise knowledge across domains, making them adaptable for different applications. Their translation abilities can break language barriers and promote global accessibility.

However, their power also raises ethical concerns, including potential misuse, misinformation and biases. Responsible use and ethical guidelines are crucial. LLMs have spurred innovation in natural language processing and are valuable in healthcare, finance, customer service and content generation. They enhance language understanding and chatbot interactions, leading to more natural conversations. Their rapid development has prompted policymakers to consider new regulations addressing privacy, security and content moderation in the context of these potent language models.

Challenges associated with LLMs

LLMs are susceptible to the inadvertent perpetuation of biases embedded in their training data, leading to the reinforcement of stereotypes and instances of discrimination. This becomes particularly evident when a biased language model generates prejudicial content on sensitive subject matter. Their capacity to craft highly convincing fake news, deepfakes and other forms of disinformation poses a substantial threat to public trust and the integrity of disseminated information. LLMs can also harvest and fabricate personal information, potentially infringing upon individual privacy rights, as exemplified by convincing phishing emails or life-like digital avatars mimicking real individuals in online interactions. Finally, LLMs can be harnessed for malicious purposes, including the automation of cyberattacks, spam generation and the propagation of harmful propaganda.

In response to these challenges, there have been notable global efforts to establish a regulatory framework for LLMs. These initiatives include the formulation of AI ethics protocols - prominently advocated by the European Union in its AI Act - which emphasize the necessity of trustworthy AI characterized by transparency, fairness and accountability. Major platforms like Facebook have implemented AI-driven content vetting mechanisms to scrutinize and flag potentially harmful or misleading information generated by LLMs, serving as a real-time moderation example. India, too, is set to address the online harms of AI in its proposed Digital India Act. OpenAI, the creator of GPT-3.5, has introduced usage policies that restrict the deployment of its AI models in high-risk applications, notably including deepfake manipulation.

Furthermore, proponents of LLM regulation strongly endorse the concept of third-party audits, which could provide independent assessments of AI systems, evaluating their safety, impartiality and adherence to ethical standards. These regulatory endeavours aim to strike a balance between harnessing the potential of LLMs and mitigating the inherent risks they pose.

Legal implications arising from the use of LLMs

Data Privacy Concerns

Data security and consent are paramount when it comes to LLMs. To train these models, access to extensive and diverse datasets is required. Failing to implement robust security measures can lead to data breaches, potentially exposing sensitive personal information. For example, a company utilizing LLMs for personalized content generation may become vulnerable to hacking or unauthorized access if user data is not adequately protected.

The process of training LLMs often involves aggregating massive datasets, which may include extraneous or irrelevant data. This could conflict with the principle of data minimization, as companies might end up collecting more data than necessary for model training. Moreover, ensuring user consent and control is challenging. Users may not fully understand how their data is utilized to train LLMs and often consent to vague terms and conditions.

Concerns in Competition Law

LLMs can also entrench data monopolies. Companies with access to extensive datasets gain a competitive advantage in training more advanced LLMs, allowing dominant players to control vast amounts of valuable data and making it challenging for new entrants or smaller competitors to compete effectively. Discriminatory behaviour is another issue: deployments that favour certain businesses or exclude others may violate competition laws. For example, an e-commerce platform using an LLM to prioritize specific sellers over others could harm fair competition, raising concerns under antitrust regulations.

Intellectual Property Law concerns

The impact of LLMs on intellectual property (IP) laws revolves around content creation, ownership and infringement. Determining the ownership of AI-generated content remains intricate, as existing copyright frameworks may not adequately address the unique nature of AI-generated creative works. For instance, when an LLM generates music or literature, the question of copyright ownership becomes complex.

Infringement and plagiarism concerns arise when AI-generated content inadvertently violates existing copyrights, especially when the model has been trained on a broad range of texts. Whether AI-generated content constitutes fair use or a transformative work requires careful consideration and is not always straightforward. For example, determining whether an AI-generated remix of a song is transformative, and whether it infringes on the original copyright, can be a complex legal issue.

Charting the regulatory path for LLMs

A multi-faceted approach is essential to regulate LLMs, one that harmonizes innovation with ethical considerations while upholding individual rights. Such regulation encompasses several crucial facets. It advocates for unwavering transparency in data collection and model training, accompanied by meticulous record-keeping to facilitate regulatory audits. Bias mitigation strategies, diversity in training data and accountability mechanisms must be emphasized to ensure the fair and ethical use of LLMs. Stringent guidelines delineating acceptable LLM applications, along with punitive measures for violations, can form an integral part of the framework. Oversight by a dedicated regulatory body and third-party audits would enhance compliance and transparency. Public awareness initiatives and digital literacy programs can empower citizens to navigate AI-generated content. Finally, investment in research and development, international collaboration, adaptable regulations and strengthened data protection are all required for a comprehensive approach.

A way forward

LLMs represent a transformative leap in artificial intelligence, opening doors to unprecedented possibilities across various domains. These models have the capacity to generate creative content, aid in research and enhance automation, but their exponential growth raises critical considerations. From data privacy concerns to potential biases, ethical dilemmas and the evolving regulatory landscape, the journey of LLMs is marked by both promise and challenges. It is imperative that as we continue to harness the power of LLMs, we do so with unwavering commitment to ethical principles, transparency and accountability. The responsible development and regulation of LLMs are pivotal to ensuring that these powerful tools serve the greater good, enriching our lives while respecting our values and rights. As LLMs continue to evolve, it is our collective responsibility to steer their trajectory toward a future where innovation aligns seamlessly with ethics and humanity’s best interests.

Sanhita Chauriha works in the Applied Law and Technology vertical of Vidhi Centre for Legal Policy.
