AI risks change fundamentally depending on the type and complexity of your AI systems. Risks often emerge from how an AI system behaves in different situations and use cases, rather than only from changes to the software, and they can rapidly amplify small issues into significant problems.
For example, an AI chatbot that answers simple questions during business hours, when a staff member can monitor it, is a low-risk use of AI. The risks expand, however, if that chatbot operates 24/7 without human oversight and answers more complex questions.
To use AI responsibly, organisations need to be able to identify and manage its risks.
Getting started
- Create a risk screening process to identify and flag AI systems and use cases that pose unacceptable risk or require additional governance attention. Look at our risk screening template for help with this process.
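As a minimal illustration of what a screening step could look like in practice, the sketch below encodes a simple triage rule. The categories, thresholds and outcomes are hypothetical placeholders, not the contents of the risk screening template:

```python
# Hypothetical sketch of a risk screening check for an AI use case.
# The criteria and outcomes below are illustrative only; base real
# screening on the risk screening template and your own context.

def screen_use_case(autonomy: str, human_oversight: bool, impact: str) -> str:
    """Return a screening outcome for an AI use case.

    autonomy: "narrow", "general" or "agentic" (illustrative categories)
    human_oversight: whether a person monitors the system's outputs
    impact: "low", "medium" or "high" potential harm if the system fails
    """
    if impact == "high" and not human_oversight:
        return "unacceptable: redesign or add human oversight"
    if autonomy == "agentic" or impact == "high":
        return "flag: additional governance attention required"
    if not human_oversight:
        return "review: confirm monitoring arrangements"
    return "standard controls apply"

# A monitored, low-impact chatbot (the low-risk example above)
print(screen_use_case("narrow", human_oversight=True, impact="low"))
# The same chatbot running 24/7 unattended on complex queries
print(screen_use_case("narrow", human_oversight=False, impact="high"))
```

This mirrors the chatbot example: the same system moves from "standard controls apply" to an unacceptable-risk flag once oversight is removed and the potential impact grows.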
Next steps
- Set up risk management processes that account for the differences between traditional IT, narrow AI, general purpose AI and agentic AI systems.
- Conduct risk assessments for each specific use case and create mitigation plans for the impacts identified in that context.
- Apply risk controls based on the level of risk for each of your specific uses of AI.
- Create processes to investigate, document and analyse AI-related incidents. Use the lessons learned to prevent incidents from recurring and to improve your AI systems and risk management processes.
For more details, check the implementation guidance on how to measure and manage risks.