AI and Australian law

Understand how existing Australian laws apply to common artificial intelligence risks

Below is a summary of general laws that apply to AI harms. This list is not comprehensive; you need to be aware of which laws apply to your organisation in your specific setting.

In addition to the list below, a range of obligations under the Fair Work Act apply to workplaces, including a requirement to consult with employees in certain circumstances, which can include the introduction of AI in the workplace.

Security challenges

Laws that may apply to organisations that have not secured their AI systems include:

  • Directors’ duties (e.g. to exercise powers and discharge duties with due care and diligence) to assess and govern risks to the organisation, including non-financial risks (e.g. from AI and data).
  • Privacy laws, including the Australian Privacy Principles, which require steps that are reasonable in the circumstances to protect personal information, and impose data minimisation obligations to destroy or de-identify information that is no longer needed.
  • The Security of Critical Infrastructure Act and sector-specific laws (e.g. for financial services), which impose risk management and cybersecurity obligations.
  • Negligence, if a failure in risk management practices amounts to a failure to take reasonable steps to avoid foreseeable harm to people owed a duty of care, and that failure causes the harm.  
  • Work health and safety laws, if a person conducting a business or undertaking has not done all that is reasonably practicable to prevent or minimise physical or psychosocial risks to workers or others who may be affected by the carrying out of work.  
  • Online safety laws, if certain online service providers fail to take pre-emptive and preventative actions to minimise harms from online services. 

If AI systems are not properly secured, this can result in data leakage and privacy breaches, and pose safety risks to customers, employees and infrastructure.

Producing misleading outputs

Laws that may apply to organisations where AI systems have resulted in misleading representations include:

  • The Australian Consumer Law prohibitions against unfair practices (e.g. misleading and deceptive conduct and false and misleading representations) may apply: 
    • if the outputs are misleading (e.g. deceptive use of deepfakes) 
    • to misleading representations or silence as to when AI is being used 
    • to misleading statements as to the performance and outputs of the AI systems.

AI systems can produce misleading statements due to hallucinations or be used maliciously to deceive people.

Harmful systems and outputs

Laws that apply to harms that arise from AI system misuse or malfunction include:

  • Product liability (where the organisation is a manufacturer), if outputs result in harm caused by a safety defect (e.g. a defect in the design, model, manufacturing or testing of the system, including failure to address bias or cybersecurity risk) and other product safety laws (including recalls and reporting).
  • Work health and safety laws where outputs introduce physical or psychosocial risks or harms to workers or others affected by the carrying out of work. Persons conducting a business or undertaking are responsible for managing health and safety risks, including consulting workers and providing training on safe AI use, and cannot shift liability to AI systems.  
  • Criminal laws, if the output resulted in, or aided or abetted the commission of a crime.
  • Online safety laws, if the outputs are restricted or harmful online content (such as cyberbullying or cyber-abuse material, or non-consensual sharing of intimate images or child sexual abuse material).
  • Defamation laws, if the outputs are defamatory and the organisation participated in the process of making the defamatory material available (such as through making the tool available or training) rather than merely disseminating the content.
  • Privacy laws, where outputs involve sensitive information that is generated, used or disclosed without the individual’s consent.
  • Negligence, if an organisation fails to exercise the standard of care of a reasonable person to avoid foreseeable harm to persons to whom it owes a duty of care, and that failure causes the harm.

AI systems can fail or be misused, producing harm to customers, employees or the broader community.

Misuse of data or infringement

Laws that apply when data or personal information has been misused include:

  • Intellectual property laws (including copyright), privacy laws, duties of confidence and contract law protect against the use, reproduction and/or disclosure of data (including training data, input data and outputs) and the model or system without the requisite consents or rights.
  • Privacy laws, which regulate the collection, use and disclosure of personal information, impose transparency requirements (with specific provisions for some automated decision-making to apply from 10 December 2026) and data minimisation requirements on the handling of personal information, and provide a statutory tort for serious invasions of privacy, which commenced on 10 June 2025.
  • The Australian Consumer Law prohibitions against misleading and deceptive conduct, unconscionable conduct, and false and misleading representations may apply to unfair data collection and use practices.
  • Work health and safety laws, if information is collected as part of the conduct of a business or undertaking and misuse of that data creates a risk to workers or others who may be affected by the carrying out of work.
  • State and territory laws where AI is used for workplace surveillance.

AI systems rely on data, which must be secure as well as lawfully collected and used.

Incorrect or poor-quality outputs

Laws that apply to inaccurate data or outputs include:

  • Privacy laws impose quality and accuracy obligations that may apply to training and input data (that is personal information) and outputs (where new personal information is generated).
  • Systems that produce inaccurate or erroneous outputs such as ‘AI hallucinations’ may be in breach of statutory guarantees under the Australian Consumer Law (e.g. consumer goods be of acceptable quality and fit for purpose, or consumer services be rendered with due care and skill).
  • Work health and safety laws where any risk is created for workers or others from incorrect or poor-quality outputs from AI. For example, if AI is intended to monitor interactions between workers and plant in a warehouse, an incorrect output from AI could allow a worker to be hit or crushed by the plant.
  • The Fair Work framework, where AI technologies are being introduced and used in workplaces, for example in hiring, firing and performance management.

Data that AI systems rely on and the outputs they produce may be inaccurate.

Bias, exclusion and access

Laws that apply to bias and exclusion include: 

  • Anti-discrimination laws, including the Fair Work Act, if outputs exclude or disproportionately affect an individual or group on the basis of a protected attribute. Organisations should also ensure they meet obligations in enterprise agreements where applicable.
  • Work health and safety laws, where discrimination and bias arising from the use of AI may create a risk to the health and safety of workers and others affected by the carrying out of work. Discrimination is a psychosocial hazard under the model WHS laws.
  • Prohibitions on unconscionable conduct under the Australian Consumer Law, if the exclusion of a consumer is so harsh that it goes against good conscience.
  • Essential services obligations, e.g. if used in energy and telecommunications essential services.

AI systems may exclude people from processes, products or services.

Supply chain impacts

Laws that apply across the supply chain include:

  • Privacy laws, which require organisations to be open and transparent in managing personal information, including privacy policies and collection notices setting out where personal information is collected from or disclosed to third parties. See the OAIC’s Guidance on privacy and developing and training generative AI models.
  • The Australian Consumer Law prohibitions on unfair practices (e.g. misleading and deceptive conduct) and unfair contract terms in how an organisation engages with consumers and other businesses.
  • The Australian Consumer Law statutory guarantees (e.g. that consumer goods be of acceptable quality and fit for purpose, or that consumer services be rendered with due care and skill), which apply to business-to-business relationships where a party meets the test of a consumer.
  • Prohibitions on anti-competitive and restrictive trade practices under competition laws, which apply to how organisations engage in trade or commerce, including using AI systems to engage in anti-competitive conduct.
  • Product liability may require manufacturers to indemnify suppliers under the statutory guarantees, and proportional liability laws can restrict the liability of concurrent wrongdoers to their proportionate contribution.
  • Work health and safety laws require designers, manufacturers, importers, suppliers and installers of plant (including software), substances or structures used at work to ensure, so far as reasonably practicable, that their products are without risk to the health and safety of people who use them.

Where to go for help with AI harms

AI can create risks for people, businesses and communities. If you need support with an AI-related issue, different Australian Government agencies can provide guidance or reporting pathways. 

Find out where to go for help with online safety, cyber security, privacy, consumer issues, workplace rights and human rights.

Related resources

eSafety Commissioner:

  • Generative AI position statement: Information on generative AI for technology industry professionals, academics and subject matter experts. 
  • Safety by Design foundations: Practical guidance to put safety at the centre of your organisation, for technology industry developers, leaders and governance professionals.

Office of the Australian Information Commissioner:

Creative Australia:

Safe Work Australia:

  • Safe Work Australia is a national body providing information for organisations and workers on health and safety risks and duties.

Fair Work Ombudsman: