Quick guides to support your AI journey
Artificial Intelligence (AI), once the preserve of tech professionals, became a global phenomenon with the public release of OpenAI's ChatGPT in November 2022. Since then, many other AI models have emerged. The Australian, state and territory governments, in line with their international counterparts, recognise the opportunities presented by AI but also acknowledge the associated risks. They are addressing these risks through various legislative, regulatory and other measures. Our guides help you understand these measures and the actions you can take to ensure your organisation's use of AI is safe and responsible.
Voluntary AI Safety Standard
Published by the Department of Industry, Science and Resources in September 2024, the Voluntary AI Safety Standard comprises 10 guardrails that apply to all organisations throughout the AI supply chain. The intention is that, by adopting the voluntary guardrails, organisations can use and innovate with AI in a consistent, safe and responsible way.
Mandatory Guardrails
Published by the Department of Industry, Science and Resources in September 2024, the proposed Mandatory Guardrails would apply to developers and deployers of AI used in high-risk settings. They would require developers and deployers of high-risk AI to take proactive measures to ensure their products are safe and to reduce the likelihood of harm arising.
AI and Director Duties
Many organisations are rapidly adopting AI, including in functions such as strategy, corporate finance and risk. Directors are ultimately responsible for the oversight of risk management in an organisation, including risks arising from the use and deployment of AI. This infographic sets out some of the key issues directors should be aware of when their organisations use and deploy AI, along with some practical steps they should take.