Data security

Understand how artificial intelligence uses data so you can manage risks and stay safe

When you use AI, the system processes your data and in some cases stores it. Keeping data secure is essential for managing risk and meeting your legal obligations.

Data security means protecting data from unauthorised access, misuse or breaches.

Strong data security helps:

  • reduce the risk of cyber attacks that can be costly to resolve
  • protect personal and business information
  • maintain trust and credibility
  • support safe and responsible AI use.

Understand your data sources

When using AI tools, data could be:

  • entered directly by staff
  • uploaded as files
  • shared through connected systems or integrations
  • generated or inferred by the AI system itself.

Some risks may not be obvious at first, particularly when using public or third-party AI tools that are cloud-based.

Prepare before using an AI tool

Before you start an AI project, check the following:

  • Is the AI tool tested and suitable for what you need it to do?
  • What data will be used, shared or created, and is it necessary?
  • What privacy and security risks could affect your business?
  • Who will be able to access the data you provide, or the outputs AI creates?
  • Where will the data be stored, and will it be used or shared outside your business?
  • Does the provider clearly explain how the system works and how they handle your data?

Maintain security after you start using the AI system. Regular reviews, staff training and monitoring help ensure the system stays appropriate and compliant.

Protect your data from cyber security risks

AI systems rely on the same standard security controls as other digital systems. This includes secure access, strong authentication, system monitoring and incident response.

Keep your data secure at every stage – from sourcing and storing to sharing and deleting.
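One small, concrete part of "secure access" is keeping credentials for AI services out of your source code. The sketch below is illustrative only — the environment variable name is an assumption, not a real product's setting — and reads a key from the environment so it is never committed to code:

```python
import os

def load_api_key(env_var: str = "AI_SERVICE_API_KEY") -> str:
    """Read an AI service credential from the environment.

    The variable name is illustrative. Keeping keys out of source
    code is one basic access control; a secrets manager is better
    still for larger teams.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; store credentials in an "
            "environment variable or secrets manager, never in code."
        )
    return key
```

If the variable is missing, the function fails loudly rather than falling back to a default, which makes misconfiguration easy to spot.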

Visit the Australian Signals Directorate’s Australian Cyber Security Centre to learn more about artificial intelligence for small business.

For system owners, read their guidance on AI data security.

Manage privacy risks

AI systems can create new or increased privacy risks for people.

Personal information is anything that could identify someone, such as:

  • names
  • dates of birth
  • email addresses and phone numbers
  • financial details
  • photos or recordings.

If an AI system uses personal information, privacy obligations apply.
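One practical safeguard is to mask obvious personal identifiers before text leaves your systems for a third-party AI tool. The sketch below is a minimal illustration only — real de-identification needs much broader coverage (names, addresses, account numbers) and human review, and the patterns here are assumptions tuned to Australian-style phone numbers:

```python
import re

# Illustrative patterns only - not a complete de-identification tool.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+?61|0)[\s-]?\d(?:[\s-]?\d){7,8}")

def redact(text: str) -> str:
    """Mask email addresses and phone numbers in free text."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    return text

print(redact("Call Kim on 0412 345 678"))  # -> "Call Kim on [phone]"
```

Note that the example leaves the name "Kim" untouched — pattern matching alone cannot reliably catch names, which is why redaction should be one layer among several, not the whole answer.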

How privacy obligations apply

If you use personal information in or from an AI system, you must comply with the Australian Privacy Principles. This includes information the system creates or infers about a person, even if it is inaccurate or invented.

These rules apply whether the system is built in-house or supplied by a third party.

To understand your obligations, check the latest advice from the Office of the Australian Information Commissioner (OAIC). They are the national regulator for privacy and freedom of information.  

Read their guidance on privacy and the use of commercially available AI products.

Why this matters

AI systems rely on large volumes of data. If that data is sensitive, insecure or compromised, it can affect how the AI system behaves and increase legal, privacy and cyber risks.

Good data security practices help ensure your data:

  • has not been tampered with
  • is free from malicious or unauthorised content
  • does not contain unnecessary duplication or anomalies
  • is handled consistently across the AI system lifecycle.

AI data security depends on strong, underlying cyber security controls for all systems that store, process or connect to your data.
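A simple way to check that a dataset has not been tampered with is to record a cryptographic digest when you source the data and compare it before each use. The sketch below, using Python's standard library, is one minimal approach, not a complete integrity regime:

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected: str) -> bool:
    """Compare a file's current digest to a previously recorded value."""
    return file_digest(path) == expected
```

Recording digests when data enters your systems, and verifying them before training or upload, gives an auditable check that files were handled consistently across the AI system lifecycle.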

Getting ready to start an AI project

Before using AI tools, check the following:

  1. Know what data will be used. Identify what business or personal data the AI tool will access, where it comes from and why it is needed.
  2. Put basic security and privacy controls in place. Check that your systems, processes and staff practices protect data from unauthorised access, use or sharing.
  3. Assess risks across the AI lifecycle. Consider risks at every stage: design, build, testing and ongoing operation of the AI system.
  4. Use trusted advice. Refer to Australian Government guidance on AI data security and privacy to understand your responsibilities and good practice.