As Artificial Intelligence (AI) creeps into every facet of daily life, its deployment and development have raised new issues for policymakers to consider. Recent online and news media highlight concerns over the re-use of data for model development – for example, how copyright and data protection laws apply to web scraping. The opacity of platform algorithms that assign jobs to gig workers or set prices for consumers is another brewing problem.
With the European Union’s Artificial Intelligence Act entering into force on August 1, we can expect an uptick in interest in AI regulation. The EU AI Act adds to a growing collection of laws that regulate AI.
Three archetypes have emerged globally. The first is targeted laws that regulate the use of AI in specific industry applications; for example, both China and the EU have introduced regulations for recommendation systems deployed by online platforms. The second is the extension of data protection laws to regulate AI whenever personal data is used, such as for automated decision-making. The third, exemplified by the EU AI Act, is an overarching law that regulates AI systems according to the risks they pose.
In Singapore, Minister Josephine Teo has recently said that there are no immediate plans to introduce overarching laws to regulate AI. Existing laws already address harms associated with AI and, where necessary, can be updated to close gaps. Singapore has opted for a nuanced approach that places equal emphasis on the twin engines of AI adoption and consumer protection, which will enable our economy and society to take flight in the age of AI.
We do not always need to regulate. Interventions can also take the form of forbearing to deploy AI in certain settings: the Chief Justice has said that the process of judging – which is an exercise of our shared humanity – should remain a largely human endeavour, in order to preserve empathy and reflect the values of our judicial system. Any departure from this position should be considered thoughtfully.
When regulations are necessary, Singapore’s multi-pronged approach supports AI adoption, clarifies how existing regulations apply and builds future capabilities for regulation.
First, existing laws have been amended to support the use of AI, thereby enabling the economy to benefit from broader AI adoption. The Copyright Act 2021, for example, has been amended to clarify that copyrighted material may be used for machine learning provided that the model developer had lawful access to the data. Amendments to the Personal Data Protection Act (PDPA) 2012 enabled the re-use of personal data to support research and business improvement, after model development using anonymised data proved to be inadequate. Detecting fraud, preserving the integrity of systems and ensuring physical security of premises are also recognised as legitimate interests for using personal data in AI systems.
Second, regulatory guidance has been issued on how existing regulations that protect consumers will also apply to AI systems. The Personal Data Protection Commission has issued a set of advisory guidelines on how the PDPA will apply at different stages of model development and deployment whenever personal data is used. The guidelines also clarify the level of transparency expected of organisations deploying AI systems and how they may disclose relevant information to boost consumer trust and confidence. Another example is the Health Sciences Authority’s regulatory guidance for software medical devices, which has been expanded to include specific requirements when AI medical devices are submitted for registration. These are examples of how Singapore is extending existing regulatory frameworks to cover AI systems.
Third, Singapore is developing capabilities for AI testing and certification. For regulation to be effective, principles for responsible AI must be translated into process and technical standards, and we must also find ways of objectively measuring whether these standards are met. Efforts to develop standards of practice in healthcare, financial services and the broader digital economy are found in the Ministry of Health's (MOH) AI in Healthcare Guidelines (AIHGle), the Monetary Authority of Singapore’s (MAS) Fairness, Ethics, Accountability and Transparency (FEAT) principles, and the Infocomm Media Development Authority's (IMDA) Model AI Governance Framework respectively. These efforts are complemented by initiatives to develop testing and certification capabilities, such as MAS’ Veritas and IMDA’s AI Verify.
Finally, the Singapore Computer Society has been taking the lead in equipping professionals with the knowledge to apply ethical AI practices in their work and organisations, through its joint certification course in AI Ethics and Governance with Nanyang Technological University, offered since 2021. To date, close to 500 professionals have taken the certification course.
Singapore is poised to benefit from the ongoing AI revolution. Our laws have been updated to support AI adoption, and we are gradually calibrating existing regulatory levers to deal more effectively with the harms posed by AI. We are ready to introduce new laws if necessary, and we are developing our AI assessment capabilities. By staying adaptable and forward-thinking, we ensure that Singapore remains at the forefront of AI innovation while safeguarding our society.
Yeong Zee Kin is the Chief Executive of the Singapore Academy of Law (SAL).
This opinion piece is adapted from a chapter he contributed to the Singapore Computer Society’s AI Ethics and Governance Body of Knowledge Version 2.0.