
As Artificial Intelligence (AI) tools become more accessible, it is tempting to use publicly available models such as ChatGPT or Google Gemini to seek legal advice. After all, how does that differ from consulting a conventional search engine for an answer or a steer in the right direction?

Unfortunately, using AI as your legal assistant comes with numerous pitfalls. This is especially the case if you are entering confidential, personal or otherwise commercially sensitive information into the tool. In this article, we explore some of the key risks, including potential inaccuracies, breaches of confidentiality, waiver of legal professional privilege and privacy concerns.

The ‘hallucination’ problem

Large language models (such as ChatGPT, Google Gemini and Perplexity AI) are examples of generative AI, known for their ability to instantaneously produce large amounts of sophisticated text from minimal prompts. Generative AI operates in a similar way to predictive text, generating responses based on patterns and predictions about what text is statistically likely to follow, rather than on verified facts.

This can lead to ‘hallucinations’, which occur when AI models produce inaccurate, misleading or sometimes entirely fabricated results, including false cases, incorrect legal principles or even non-existent laws. A growing number of lawyers (even experienced ones) have been caught out misusing AI in this way. When using AI for legal research, you effectively need to work backwards from the answer to validate its accuracy and the sources relied upon.

AI and the duty of confidentiality

When seeking legal advice, you are usually doing so because a specific set of facts has given rise to a legal issue, and it is precisely those facts that you might be tempted to use to prompt the AI.

However, when you interact with an AI model, you are not dealing with someone who is bound by obligations of confidentiality, and who can be relied on to observe them, in the way a colleague or legal advisor would be. Instead, you are engaging with software or an application that may not reliably distinguish between confidential and non-confidential information, increasing the risk of unintended disclosures. Behind that software or application sits a technology vendor that may or may not be technically and contractually restricted from accessing information entered into the AI model, and that may use the information to further train or improve the model.

Although some platforms offer deletion features, these are often limited due to the ‘black box’ nature of AI (and may only involve the deletion of chat history or account information). Much like the human brain, once information is absorbed by the AI, it becomes deeply embedded and difficult, if not impossible, to fully erase. This is especially the case if the data has been used to train the model.

There is also the risk that any data you input into an AI model could be disclosed under certain legal circumstances, such as a court order. Recently in the United States, a federal court ordered OpenAI to preserve some 400 million users’ chat logs (including deleted chats) as part of discovery in a case brought by The New York Times against OpenAI.[1]

It is best to assume that feeding information into an AI model is akin to putting it in the public domain.

Waiving legal professional privilege

Legal professional privilege protects confidential communications and documents between a lawyer and their client from mandatory disclosure. To be privileged, the communications or documents must be made for the dominant purpose of providing legal advice or professional legal services, or for use in current or anticipated litigation. Legal professional privilege also extends to in-house lawyers, who must show that the document was brought into existence in the course of the performance of their professional role as a lawyer. The rationale for legal professional privilege is that clients must be able to communicate openly and freely with their lawyer.

However, privileged communications must remain confidential. A client can waive privilege by acting in a way that is inconsistent with preserving the confidentiality of a communication, for example by entering the information into a publicly available AI model.

The Victorian Legal Services Board and Commissioner has issued a statement saying that lawyers cannot safely enter confidential, sensitive or privileged client information into public AI chatbots or copilots (like ChatGPT), or any other public tools. Lawyers who use commercial AI tools with any client information need to carefully review the contractual terms to ensure the information will be kept secure.[2]

Providing personal information to an AI model

Specific privacy concerns arise when entering personal information (that is, information that can identify an individual, such as names, addresses, phone numbers, dates of birth or health-related information) into an AI model.

Under the Privacy Act 1988 (Cth), personal information may only be used and disclosed for the purpose for which it was collected (the primary purpose), or for a secondary purpose if the individual has consented, or if the individual would reasonably expect the use or disclosure and the secondary purpose is related to the primary purpose. Consent may therefore need to be obtained before disclosing personal information to an AI model, especially where the personal information is used for training purposes.

In addition, an organisation may need to ensure that sufficient contractual protections are in place with the AI vendor if personal information is transferred outside Australia. Likewise, organisations should conduct adequate due diligence on the security of the AI product, including an assessment of the security measures implemented by the vendor to protect against threats and cyberattacks.

As a matter of best practice, the Office of the Australian Information Commissioner (OAIC) recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools, due to the significant and complex privacy risks involved.

Conclusion

So, the next time you consider getting legal advice from your AI assistant, think again. Or at least, think about what information you are disclosing, who you are really disclosing it to and what risks you might be exposing yourself to.


[1] Kevin Bankston, ‘In ChatGPT Case, Order to Retain All Chats Threatens User Privacy’, Center for Democracy and Technology (Web Page, 25 June 2025) https://cdt.org/insights/in-chatgpt-case-order-to-retain-all-chats-threatens-user-privacy/

[2] Victorian Legal Services Board and Commissioner, Statement on the Use of Artificial Intelligence in Australian Legal Practice (Web Page, 6 December 2024) https://www.lsbc.vic.gov.au/news-updates/news/statement-use-artificial-intelligence-australian-legal-practice
