Be clear about AI-generated content

Understand when and how to disclose your use of artificial intelligence

Being clear when you use AI to create or change content helps people understand where the information comes from and how much they can rely on it.

Content is any online or printed material, such as text, images, video or audio.

Why this matters

AI-generated content is becoming common across marketing, customer communications, research and media. Sometimes it can be hard to tell whether a person or AI created the content.

Laws on misleading information, privacy and online safety may apply to AI-generated content.

Being transparent about AI use can help you:

  • build trust – people can rely on the source when they know who or what created the content
  • reduce risks – clear information helps you meet your legal obligations and avoid causing harm through misleading content
  • support public education – people can better recognise and judge whether content is authentic
  • gain competitive advantage – people are more likely to engage when they know you’re being transparent. 

How to be transparent

There are a few ways to disclose when you use AI to create or change content.

You may not need to disclose it every time – the right approach depends on your context. This includes the potential impact of the content and how much AI you used to create it.

The following is a summary of key actions you can take. 

Consider the impact

Start by considering how the content could affect people.

Use clearer or more visible methods when AI-generated content could influence decisions or affect people’s rights, safety or trust. This may apply to content for:

  • clinical or health-related contexts
  • recruitment or hiring processes
  • financial or legal information
  • public communications.

Check how much was AI-generated

Choose how you disclose AI use based on the content’s impact and how much AI contributed to creating it. Be transparent when AI:

  • created most of the content
  • changed existing content
  • changed the potential meaning of the information.

Even small edits can change meaning, such as the difference between ‘did’ and ‘did not’.  

Add clear signs

Common methods to tell people when you’ve used AI include:

  • labelling – add visible text that says how much content was created or changed by AI
  • watermarking – add a semi-transparent mark to images, video or audio
  • metadata recording – add details in the file about who created the content and its source.
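For example, if you publish AI-generated images online, you could apply a visible label and record disclosure details in the file’s metadata in one step. The following is a minimal sketch in Python using the Pillow library; the file names, label wording and metadata fields are illustrative assumptions only, not a required format.

  # Minimal sketch: overlay a visible AI-use label on an image and
  # record disclosure details in the file's metadata (Pillow).
  # File names, label wording and metadata fields are illustrative only.
  from PIL import Image, ImageDraw
  from PIL.PngImagePlugin import PngInfo

  def add_ai_disclosure(source_path: str, output_path: str,
                        label: str = "Image generated with AI") -> None:
      image = Image.open(source_path).convert("RGBA")

      # 1. Labelling / watermarking: draw semi-transparent text in a corner.
      overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
      draw = ImageDraw.Draw(overlay)
      draw.text((10, image.height - 30), label, fill=(255, 255, 255, 160))
      labelled = Image.alpha_composite(image, overlay).convert("RGB")

      # 2. Metadata recording: note that AI was used and who published it.
      metadata = PngInfo()
      metadata.add_text("AIGenerated", "true")
      metadata.add_text("Disclosure", label)
      metadata.add_text("Publisher", "Example organisation")  # illustrative value

      labelled.save(output_path, pnginfo=metadata)

  # Illustrative usage with hypothetical file names
  add_ai_disclosure("product-photo.png", "product-photo-labelled.png")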

Content Credentials are an emerging standard that helps verify where content comes from. Check the Australian Signals Directorate’s Australian Cyber Security Centre advice on Content Credentials.

You can use these methods on their own or together. Content with higher impact and AI involvement may need more than one method. Content with low impact and limited AI use might need minimal or no disclosure.