Exploring uncertainty (activity)

Work through uncertainty together and agree on next steps for using artificial intelligence

45-60 minutes | Team discussion | Low effort

Use this activity to help your team talk openly about AI. Create space for concerns, questions and different opinions. Open conversations with your people can help teams identify risks and opportunities early and share concerns before they become barriers.

Supporting teams to share their views is an important way to build trust in AI adoption and reduce uncertainty. These conversations help your team build shared understanding and agree on sensible ways to use AI.

This activity supports our guidance on how to involve your team early and navigate change. Start this activity before moving into practical learning, team planning or safe experimentation.

Why this matters

Uncertainty about AI is normal. When leaders don't create a culture of open discussion, people may make assumptions, hold back legitimate concerns, avoid asking questions or experiment without clear support.

Talking about uncertainty and concerns helps teams:

  • move from uncertainty to shared understanding
  • separate real risks from assumptions
  • identify potential risks to the organisation
  • identify where the team needs support or guidance
  • understand the responsibilities of leaders
  • agree on practical approaches.

This activity can help your team make sense of uncertainty and prepare for change.

It doesn’t replace:

  • building practical AI skills
  • documented risk management processes
  • an accountability framework
  • testing low-risk ways to use AI 
  • setting clear rules for AI use
  • ongoing support and follow-up
  • understanding and complying with WHS legislation
  • formal major change consultation processes defined under the Fair Work Act.

How to run this activity

Work through the steps as a team discussion. Keep the conversation grounded in your team’s real work, current concerns and readiness.

This activity works best when people feel invited, not exposed.

Set clear expectations before you start. For example:

  • participation is voluntary
  • people's voices are valued
  • what's raised may influence outcomes or processes
  • people may speak, write, listen or pass
  • emotional, practical and structural concerns are all valid
  • this is a space to understand concerns, not debate them
  • not everything needs to be resolved during the session.

If you’re leading the session and also feel uncertain, it’s okay to say that. Being honest about uncertainty builds trust when it’s paired with clarity about what happens next.

Step 1: Set the context

Time: 5-10 minutes

Open the conversation in plain language.

For example:

AI is starting to affect how work gets done, but not always in clear or consistent ways. Today is about understanding what’s coming up for us as a team, so we can build clarity and decide a sensible next step.

If useful, add one or two points for context, such as:

  • AI is more likely to change tasks than remove entire roles
  • people are already using AI informally at work
  • support or guidance may be uneven.

Pause for reflection before inviting discussion.

Step 2: Name the concerns

Time: 20 minutes

Invite people to raise concerns without judgement.

Use prompts such as:

  • which concerns sound familiar?
  • what have you heard other people say?
  • which concerns feel emotional?
  • which concerns feel practical?

Common concerns may include:

  • losing my job or parts of my role to AI
  • appearing less capable by using AI
  • not understanding how it works
  • breaking policy or doing something unsafe
  • feeling that AI is irrelevant to my role
  • falling behind by ignoring AI.

When your team raises concerns, acknowledge them and avoid correcting. The goal is to make concerns visible and shared.

Step 3: Understand the concerns

Time: 20 minutes

Once concerns are visible, start looking for patterns.

Group common patterns into themes such as:

  • role change and job security
  • confidence and capability
  • safety, policy and governance
  • relevance and scepticism.

Ask your team which concerns point to something they need more clarity, guidance or support on. This helps separate personal anxiety from system, policy or leadership gaps. 

At this stage, progress looks like better shared understanding, not certainty.

Step 4: Choose one small, sensible next step

Time: 15 minutes

Choose one low-risk action your team can do after the activity. Focus on something that’s achievable and within your team’s control.

For example:

  • clarify which AI tools are allowed at work
  • identify one low‑risk task for experimentation
  • create a shared learning space
  • raise policy questions with leadership
  • request training or clearer guidance.

The goal is thoughtful progress, not speed. A small visible action helps your team feel like they’re allowed to move forward safely.

Closing reflection

Time: 5-10 minutes

Close the session with reflection rather than resolution.

Ask:

  • what did we hear that surprised us?
  • which concerns feel clearer now?
  • what still feels uncertain?
  • what should we come back to?

Let your team know when and how this conversation will be revisited. For example, after new guidance, training or decisions are made. Re‑opening the discussion signals that uncertainty is being managed, not ignored.

References

This activity draws on reputable Australian sources, including:

  • Jobs and Skills Australia – Our Gen AI Transition: Implications for Work and Skills
  • Parliamentary Library (Australian Parliament) – Potential impact of AI on the Australian workforce
  • National Artificial Intelligence Centre – Guidance for AI Adoption
  • Harvard Business Review – Your team is anxious about AI. Here's how to talk to them about it.
  • Technology Council of Australia – national worker attitude surveys
  • PwC Australia – AI Jobs Barometer

Claims in this resource are intentionally cautious and reflect current Australian evidence rather than forecasts.