Harnessing the AI evolution
03 December 2024

Artificial Intelligence (AI) is not a new concept. But now that AI's transformative capabilities have been recognised and large language models are in widespread use, we need to understand the legal impacts of AI. To do so, we need to understand the ethics around the use of AI, maintain sound data governance and develop an understanding of relevant risk frameworks.
Before going further, it’s useful to define AI. The Organisation for Economic Co-operation and Development (OECD) definition seems to be generally accepted, namely, ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’[1]
AI has many benefits: it will enable us to do many tasks more quickly and simply. AI can analyse massive amounts of data rapidly, enabling real-time decision making at a scale and speed beyond human capability. AI will therefore bring financial and social gains. The Australian Financial Review (3 November 2024, p 20) described a company whose founder developed a chatbot to sell the company's product. The bot did the work that would otherwise be done by 40 people on a salary of $77,000 per person per year. Similarly, Sam Altman of OpenAI has suggested that AI will enable a founder to build a $1 billion company without hiring anyone.
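A back-of-the-envelope calculation puts the AFR example in perspective. The headcount and salary figures come from the article; the saving shown is simply their product, an illustrative sketch rather than the company's actual accounts:

```python
# Back-of-the-envelope: annual salary cost displaced in the AFR example.
# Headcount and salary are taken from the article; the product is
# illustrative arithmetic only.

headcount = 40
salary_per_person = 77_000  # AUD per year, per the article

annual_saving = headcount * salary_per_person
print(f"Implied annual salary cost displaced: ${annual_saving:,}")
# -> Implied annual salary cost displaced: $3,080,000
```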
However, AI also comes with risks and costs. There is a social cost in that some workers will need to be redeployed. Other risks arise because AI is becoming more autonomous and its cognitive capabilities are expanding at pace.
As such, the inputs into an AI capability are critical. AI is capable of causing harm, for example through bias and misinformation. Newer forms of AI involving deep learning and generative AI, in which a computer is trained on large data sets to handle data in a human-like way and can adjust to new inputs as it learns, can actively search for new inputs and draw conclusions, or sometimes even 'hallucinate' outputs.
For example, in the US case of Mata v Avianca, Inc (2023), the plaintiff's legal representatives used ChatGPT for their legal research and the AI software created fake cases in support of their client's position. The Court, tasked with considering the lawyers' errors, found that in using AI without checking the results and without declaring that use, the lawyers concerned had engaged in: 'bad faith on the part of the individual Respondents based upon acts of conscious avoidance and false and misleading statements to the Court. (See, e.g., Findings of Fact 17, 20, 22-23, 40-41, 43, 46-47 and Conclusions of Law 21, 23-24.) Sanctions will therefore be imposed on the individual Respondents. Rule 11(c)(1) also provides that "[a]bsent exceptional circumstances, a law firm must be held jointly responsible for a violation committed by its partner, associate, or employee." Because the Court finds no exceptional circumstances, sanctions will be jointly imposed on the Levidow Firm. The sanctions are "limited to what suffices to deter repetition of the conduct or comparable conduct by others similarly situated." Rule 11(c)(4).'[2]
AI can also make serious mistakes. Those mistakes can adversely impact humans, especially where AI is used to make discretionary decisions such as whether to issue licences or visas, to determine eligibility for public benefits, or to assist in law enforcement.
As a result of these risk factors, governments have entered the arena to implement guard rails to ensure that AI is safe. In 2019 the OECD published its AI Principles to promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values; these principles were updated in May 2024.[3] Given the speed of technology relative to the speed of the regulatory environment, governments are playing catch-up, and there is a risk that different governments will adopt different approaches and requirements. The EU is ahead of the game with its AI Act (EU AI Act).[4] Australia is presently consulting on its Proposals paper for introducing mandatory guardrails for AI in high-risk settings. In a recent statement in the 'National Framework for the assurance of artificial intelligence in government', Australian Data and Digital Ministers stated: 'We recognise that public confidence and trust is essential to governments embracing the opportunities and realising the full potential of AI. To gain public confidence and trust, we commit to being exemplars in the safe and responsible use of AI. This requires a lawful, ethical approach that places the rights, wellbeing and interests of people first.'[5]
In this context, a joint federal/state National Framework for the Assurance of Artificial Intelligence in Government was released in June 2024. This assurance framework seeks to implement the eight Australian AI Ethics Principles.[6]
The Australian AI Ethics Principles
The Australian AI Ethics Principles set out a number of framework rules. The first is that AI systems should benefit individuals, society and the environment.
Human-centred values should be acknowledged and embedded in the rules that guide AI systems. Importantly, AI will need to be designed to comply with human rights laws and related policies and guidelines, and to incorporate diverse perspectives, or at least not to be biased towards one or a limited number of perspectives.
AI systems should be inclusive and accessible, and should not unfairly discriminate against people.
AI systems need to comply with privacy obligations. The Privacy Act continues to apply even if AI is used to collect, manipulate or disclose data.
AI systems need to operate reliably and in accordance with their intended purpose. This means AI systems need to be tested and verified over their life span.
There should be transparency about when AI is being used; the use of AI needs to be disclosed. This can be particularly relevant if AI is used in government decision making.
People need to be able to challenge AI decisions. Finally, those responsible for the different phases of an AI system's life cycle should be identifiable and accountable for its outcomes.
In addressing these ethics principles, the government’s assurance framework acknowledges the cornerstones of data governance and risk.
Data governance
Data governance is about ensuring the veracity of the data used by AI.
It has been acknowledged that some generative AI engines, such as ChatGPT, which operate on large language models, can include data that is incorrect, misinformed or biased. Therefore, it's critical to create, collect, manage, use and maintain datasets that are authenticated, reliable, accurate and representative. This is a continuous task, not a set-and-forget exercise. And as the Robodebt scandal demonstrated, it is also important that legislative compliance is built into the algorithms behind automated systems.
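What a continuous data-governance check might look like in practice can be illustrated with a minimal sketch. It assumes a simple tabular dataset; the record fields, thresholds and the audit_dataset helper are hypothetical illustrations, not requirements drawn from any official framework:

```python
# A minimal, hypothetical sketch of a recurring data-governance check.
# The record fields and thresholds are illustrative assumptions.

from collections import Counter

def audit_dataset(records, protected_attr, min_share=0.10):
    """Run basic completeness and representativeness checks on a dataset.

    Returns human-readable findings for escalation; an empty list means
    the checks passed. Intended to be re-run on a schedule throughout
    the system's life, not once at deployment ('not set and forget').
    """
    findings = []

    # Completeness: flag records with missing values.
    incomplete = [r for r in records if any(v is None or v == "" for v in r.values())]
    if incomplete:
        findings.append(f"{len(incomplete)} of {len(records)} records have missing fields")

    # Representativeness: flag under-represented groups on a protected attribute.
    counts = Counter(r.get(protected_attr) for r in records)
    for group, n in counts.items():
        share = n / len(records)
        if share < min_share:
            findings.append(f"group '{group}' is only {share:.0%} of the data")

    return findings

# Example run with toy records.
records = [
    {"age_band": "18-34", "income": 52000},
    {"age_band": "18-34", "income": 61000},
    {"age_band": "35-54", "income": ""},      # incomplete record
    {"age_band": "55+",   "income": 48000},
]
for finding in audit_dataset(records, "age_band", min_share=0.30):
    print("ESCALATE:", finding)
```

Real pipelines would run far richer checks (provenance, drift, label quality), but the shape is the same: scheduled, automated validation whose failures are escalated to a human.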
To add to the complexity, whatever Australia does in this area should be interoperable with the regulatory developments of our international counterparts and trading partners.
Risk
Risk identification and management is at the heart of any decision to use AI systems. The EU AI Act is risk-based: it classifies AI systems according to their level of risk, and the requirements it imposes around testing, transparency and accountability scale with that risk.
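The structure of a risk-tiered regime can be sketched in a few lines of code. The four tier names below reflect the EU AI Act's widely described categories; the obligation lists are simplified paraphrases for illustration and should not be read as the Act's actual legal tests:

```python
# An illustrative sketch of risk-tiered obligations in the spirit of the
# EU AI Act. Tier names follow the Act's four categories; the obligation
# strings are simplified assumptions, not the Act's legal requirements.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict obligations before deployment
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - may not be deployed"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "logging and traceability", "human oversight"],
    RiskTier.LIMITED: ["disclose that users are interacting with AI"],
    RiskTier.MINIMAL: ["no mandatory obligations"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the compliance burden attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The design point is that the classification, not the technology itself, drives the compliance burden: the same model attracts different obligations depending on the risk of the use case.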
A useful tool when dealing with the practicalities of AI systems in Australia is the NSW Artificial Intelligence Assurance Framework (NSW AIAF).[7]
The framework is aligned to the NSW ethics policy[8] and provides guidance for projects ranging from low to high risk, including the identification and escalation of high-risk projects. For example, a project that involves autonomous decision making, the autonomous operation of a vehicle or biometric face matching will fall into a high-risk category that warrants deeper consideration and risk mitigation controls, as sketched below.
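As a sketch only: the three feature flags below are taken from the framework's examples of high-risk features, but the ProjectProfile type and the any-one-trigger escalation rule are illustrative assumptions; the NSW AIAF itself defines the actual criteria and escalation process:

```python
# A hypothetical escalation check inspired by the NSW AIAF's examples
# of high-risk features. Flags and rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ProjectProfile:
    autonomous_decision_making: bool = False
    autonomous_vehicle_operation: bool = False
    biometric_face_matching: bool = False

def requires_escalation(p: ProjectProfile) -> bool:
    """Any single high-risk feature places the project in the high-risk
    category warranting deeper review and mitigation controls."""
    return any([p.autonomous_decision_making,
                p.autonomous_vehicle_operation,
                p.biometric_face_matching])

project = ProjectProfile(biometric_face_matching=True)
if requires_escalation(project):
    print("High-risk: escalate for independent review and mitigation controls")
```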
The framework covers the project life cycle: initiation, design, procurement, deployment and operation, followed by continuous monitoring and evaluation.
The NSW AIAF does not prohibit AI in high-risk situations but cautions that 'care should be taken to ensure independent evaluation and monitoring for potential harms at different stages of the system lifecycle. The level of independence in the review process should be heightened for elevated risks. … Language models and generative AI used for decision making, prioritisation or automation, require special care around output validation, ensuring a final decision is made by an appropriately authorised and qualified person.'
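The 'final decision by an authorised person' control in that quote can be made concrete with a minimal sketch. The function and field names here are hypothetical placeholders, not part of the framework:

```python
# A minimal sketch of the human-makes-the-final-decision control the
# NSW AIAF quote describes. Names and thresholds are hypothetical.

def model_recommendation(case: dict) -> str:
    # Stand-in for a language-model or scoring output: advisory only.
    return "grant" if case.get("risk_score", 1.0) < 0.3 else "refer"

def decide(case: dict, officer: str, officer_decision: str) -> dict:
    """Record both the AI recommendation and the human decision, so the
    authorised officer, not the model, is accountable for the outcome."""
    recommendation = model_recommendation(case)
    return {
        "recommendation": recommendation,
        "decision": officer_decision,
        "decided_by": officer,
        "overridden": officer_decision != recommendation,
    }

print(decide({"risk_score": 0.2}, "J. Citizen, delegate", "refer"))
```

Keeping the recommendation and the decision as separate, logged fields is one simple way to evidence that human oversight actually occurred.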
Conclusion
Given the pace of technological change and AI's vast but largely untested range of use cases, it will be difficult for developers and users of AI to implement and comply with new and evolving regulatory frameworks. However, risk-based frameworks, particularly where ethics is carefully addressed, provide useful parameters within which decisions around AI development and usage can be made, and the relevant risks can be understood and managed.
For their part, governments playing regulatory catch-up should consider adopting principles-based approaches built on ethics principles and risk-based frameworks in order to remain relevant and effective. Providing appropriate guard rails for new technology will enable humans to obtain the benefits of AI developments while managing the risks surrounding AI usage.
[1] Explanatory Memorandum on the Updated OECD Definition of an AI System, OECD Artificial Intelligence Papers, March 2024 No.8. The European Union, Council of Europe, United States and United Nations have adopted this definition.
[2] United States District Court, S.D. New York, 22-cv-1461 (PKC), decided 22 June 2023.
[3] OECD AI Principles, https://oecd.ai/en/ai-principles
[4] Approved 13 March 2024.
[5] https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government/statement-data-and-digital-ministers
[6] Australia’s AI Ethics Principles (DISR 2019)
[7] https://architecture.digital.gov.au/nsw-artificial-intelligence-assurance-framework
[8] Broadly the same as the Australian AI Ethics Principles and the OECD AI Principles.