AI transparency statement

Learn how we manage and support our own use of artificial intelligence

Being clear about when and how organisations use AI is an important way to build trust with stakeholders and to identify and manage AI risks.

The National AI Centre (NAIC) works with organisations to implement practices that help them increase AI adoption, innovate safely and secure the benefits.

We aim to set a positive example by following best practices in our use of AI and maintaining transparency throughout our work.  

AI transparency statement

The Department of Industry, Science and Resources is supporting Australia to become a leader in developing and adopting trusted, secure and responsible AI.  

The NAIC follows the department’s approach to AI governance and adoption, as described in the department’s AI transparency statement.

This statement details how we engage with AI in a safe and responsible way, in accordance with the Policy for the responsible use of AI in government (Version 2.0).

Our adoption of AI also aligns with the Guidance for AI Adoption, which sets out 6 essential practices to help industry adopt AI responsibly. These include transparency and accountability practices for developers and deployers of AI systems.

Content that has been modified or generated using AI

The NAIC is transparent about how and when AI is used to generate content.

This is an area of focus for the NAIC, which has published guidance for industry on how to be transparent when using AI to generate or modify content.

Our approach follows both this guidance and the watermarking and labelling guidance in the Technical standard for government’s use of AI.

All content we generate, modify or enhance using AI should include a visible label. Labels need to reflect the extent of AI involvement in producing the content:

  • AI-assisted 
  • AI-enhanced
  • AI-generated. 

The level of detail in labels should reflect the potential impact of the content. 

For lower impact content, labels will be simple. 

For higher impact content, the National AI Centre will be more specific about how AI was used and the role of the team in review and oversight. 

AI-assisted 

AI is used to assist team members in minor ways. For example:  

  • spelling and grammar checks  
  • automatic photo touch-ups, such as red-eye removal.

Example lower impact:  

  • This article/video was produced with AI assistance.

Example higher impact:

  • This guidance was produced with AI assistance to edit text, improve readability and accessibility. Full editorial review and control remain with the team at the National AI Centre.

AI-enhanced  

AI is used to modify or refine content through inputs or instructions entered by team members into the AI system. For example:

  • fact‑checking or editing a complex document with substantial rewrites 
  • removing specific details from the background of images, such as logos. 

Example lower impact: 

  • This article was enhanced by AI. 

Example higher impact: 

  • This tool was enhanced by AI to generate early prototypes for testing and refinement by experts. 

AI-generated   

AI generates a complete output from a simple action by a team member, such as entering a prompt or uploading a file, with little to no human oversight. For example:

  • creating an interview‑style video from a still image and a text script 
  • creating a new poster artwork from a verbal prompt. 

Example lower impact: 

  • This social post was generated by AI and reviewed by our team. 

We do not produce high-impact content without significant human oversight.

Review

These guidelines will be reviewed every 6 months and updated to align with best practice.