Being clear when you use AI to create or change content helps people understand where the information comes from, and how much they can rely on it.
Content is any online or printed material that may include text, images, video or audio.
AI-generated content is becoming common across marketing, customer communications, research and media. Sometimes it can be hard to tell whether a person or AI created the content.
Laws on misleading information, privacy and online safety may apply to AI-generated content.
Being transparent about AI use can help you:
There are a few ways to disclose when you use AI to create or change content.
You may not need to disclose it every time – the right approach depends on your context. This includes the potential impact of the content and how much AI you used to create it.
The following is a summary of key actions you can take.
Start by considering how the content could affect people.
Use clearer or more visible methods when AI-generated content could influence decisions, or when it could affect people's rights, safety or trust. This may apply to content for:
Choose how you disclose AI use based on the content's impact and how much AI contributed to creating it. Be transparent when AI:
Even small edits can change meaning, such as the difference between ‘did’ and ‘did not’.
Common methods to tell people when you’ve used AI include:
Content Credentials are an emerging standard that helps verify where content comes from. Check the Australian Signals Directorate's Australian Cyber Security Centre advice on Content Credentials.
You can use these methods on their own or together. Content with higher impact and AI involvement may need more than one method. Content with low impact and limited AI use might need minimal or no disclosure.