2.1.1 Identify and document key types of stakeholders (such as employees and end users) that may be impacted by the organisation’s development and deployment of AI, and their needs.
2.1.2 Prioritise, select and document which stakeholder needs will be addressed in organisational policies and procedures.
2.1.3 Document and communicate the organisation’s commitment to preventing harms to people from AI models and systems and upholding diversity, inclusion and fairness.
2.1.4 Document the scope for each AI system, including intended use cases, foreseeable misuse, capabilities, limitations and expected context.
2.1.5 For each AI system, engage stakeholders to identify and document the potential benefits and harms to different types of stakeholders, including:
- impacts to vulnerable groups
- risks of unwanted bias or discriminatory outputs
- use of an individual’s personal information
- where the system makes or influences a decision about a person or group of people.
2.1.6 For every documented risk of harm to affected stakeholders, conduct an appropriate stakeholder impact analysis.
2.1.7 For each AI system, monitor for potential harms by engaging affected stakeholders on an ongoing basis, and identify new stakeholders, including end users, throughout the AI lifecycle.
2.1.8 Create processes to support ongoing engagement with stakeholders about their experience of AI systems. Identify vulnerable groups and support them appropriately. Equip stakeholders with the skills and tools necessary to give meaningful feedback.
2.2 Establish feedback and redress processes