As the use of AI continues to grow, it will have a significant impact on agencies’ approach to contract management and procurement. Agencies should ensure that they identify and manage the risks of AI, particularly in non-AI based contracts and procurements.

AI and procurement – recent developments

In June 2024, the Department of Industry, Science and Resources (DISR) published the National Framework for the Assurance of Artificial Intelligence in Government (Framework), a joint approach to the safe and responsible use of AI by the Australian, state and territory governments.

The Framework recognises the opportunities of AI for all levels of government, but also acknowledges the risks of AI, stating as follows:

We recognise that public confidence and trust is essential to governments embracing the opportunities and realising the full potential of AI. To gain public confidence and trust, we commit to being exemplars in the safe and responsible use of AI.

The Framework is informed by Australia’s AI Ethics Principles and sets out ‘cornerstones of assurance’ that provide a ‘nationally consistent approach’ for the assurance of AI use in government. In relation to procurement, the Framework states:

Careful consideration must be applied to procurement documentation and contractual agreements when procuring AI systems or products. This may require consideration of:

  • AI ethics principles
  • clearly established accountabilities
  • transparency of data
  • access to relevant information assets
  • proof of performance testing throughout an AI system’s life cycle.

In addition:

  • On 15 August 2024, Senator the Hon Katy Gallagher, Minister for Finance, Women and the Public Service, and the Hon Ed Husic, Minister for Industry and Science, released the Policy for the responsible use of AI in government (AI Policy), which includes a requirement for agencies[1] to make publicly available a transparency statement:
    • outlining their approach to the adoption of AI, within six months of the policy taking effect, and
    • providing information on the agency’s use of AI, including information on compliance with the AI Policy, measures to monitor effectiveness of deployed AI systems and efforts to protect the public against negative impacts.
  • On 5 September 2024, DISR published the Voluntary AI Safety Standard (Safety Standard), which provides “practical guidance to all Australian organisations on how to safely and responsibly use and innovate with artificial intelligence”. The Safety Standard includes 10 ‘voluntary guardrails’ intended to establish a foundation for the safe and responsible use of AI, as well as ‘procurement guidance’ to assist in aligning procurements with the guardrails.

Policy and guidance on AI continue to evolve, and so does the use of AI itself. Ultimately, the use of AI is a risk management issue, and agencies need to manage that risk effectively.

How do agencies identify and manage the risk of AI?

Agencies that are expressly procuring AI are likely to have measures in place to manage the risk. However, as its use becomes more prevalent, AI may increasingly:

  • Be part of tendered solutions to non-AI based requests.
  • Be used by current contractors (even if not part of the original solution).

In these circumstances, agencies need to be particularly mindful to identify and, if required, manage the risks of AI.

We recommend that agencies consider taking the following steps to address these risks:

  • Prepare an AI transparency statement

If they have not already done so, agencies should prepare an AI transparency statement outlining their approach to the adoption and use of AI (this is a requirement under the AI Policy for most Commonwealth agencies).

  • Undertake an AI audit of:
    • Current (non-AI based) contracts – check whether AI is being used and, if so, whether any unmanaged AI risk needs to be mitigated.
    • Upcoming (non-AI based) procurements – check whether AI may form part of the tendered solution and whether the market testing documents should ask further questions to identify AI and assess tenderers’ AI risk management.
  • Undertake a review of the templates and policies supporting the procurement framework.

Agencies should check that their policies and templates ensure that tendered AI is identified and that any associated risk is assessed and managed (if required). Some of the questions agencies should ask include:

  • Does the procurement planning process include consideration of whether AI may be part of the solution?
  • Do the market testing documents ask if AI may be part of the tendered solution and how the tenderer proposes to manage the risk?
  • Does the evaluation process assess the risk of any tendered AI and evaluate the proposed mitigations?

If you have any queries on the above recommendations, please contact Robert Watson (Partner) and Stephen Newman (Special Counsel).

 

[1] The AI Policy does not apply to corporate Commonwealth entities, Defence or agencies that form part of the ‘national intelligence community’ as defined by section 4 of the Office of National Intelligence Act 2018, although these agencies may voluntarily adopt elements of the AI Policy where they can do so without compromising national security capabilities or interests.

 

 
