In the context of AI, a risk is any potential adverse impact, such as biased decision-making or a privacy breach, combined with the likelihood of that impact occurring. For instance, an AI system used in hiring might inadvertently favor certain demographics, posing ethical and legal risks. As the use of AI grows rapidly, businesses around the world are adopting AI risk management frameworks to identify and mitigate the risks their AI systems create. Without an effective AI risk management framework in place, a business can face serious consequences, such as legal liability or loss of customer trust from poorly handled data, which underscores how important it is to properly identify and manage risks in AI.
An effective AI risk management framework typically involves key steps such as risk identification, assessment, mitigation, and continuous monitoring. Understanding these steps is vital for organizations to effectively manage the complexities and potential threats posed by AI systems.
Key AI risk management frameworks, such as the NIST AI Risk Management Framework, the ICO AI Risk Toolkit, and Singapore's Model AI Governance Framework, help organizations manage AI-related risks. The NIST framework is especially notable for promoting trustworthy and responsible AI, and is widely regarded as the go-to standard for AI risk management. Organizations either adopt an AI risk management framework outright or integrate one into their existing privacy risk frameworks to address the harms, vulnerabilities, and risks arising from the use of AI. The following key steps are crucial for effective risk management:
The first step, risk identification, involves identifying the purpose of the AI system, what data flows through it, and how that data is processed. This stage helps organizations identify the relevant AI actors and the applicable privacy laws and compliance obligations. At this stage, businesses must also identify the appropriate risk model, depending on the jurisdiction they operate in and the nature of their business.
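As a minimal sketch of what the identification stage captures, the profile below records a system's purpose, data flows, AI actors, and jurisdictions, and derives applicable laws from a lookup table. The field names, the `LAWS_BY_JURISDICTION` mapping, and the example entries are illustrative assumptions, not part of any specific framework.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemProfile:
    """Records the facts gathered during risk identification."""
    purpose: str                # why the AI system exists
    data_categories: list[str]  # what data flows through the system
    ai_actors: list[str]        # developers, deployers, vendors, etc.
    jurisdictions: list[str]    # where the system operates
    applicable_laws: list[str] = field(default_factory=list)

# Hypothetical lookup: which privacy laws apply in which jurisdiction.
LAWS_BY_JURISDICTION = {
    "EU": ["GDPR", "EU AI Act"],
    "US-CA": ["CCPA"],
    "Singapore": ["PDPA"],
}

def identify_obligations(profile: AISystemProfile) -> list[str]:
    """Derive compliance obligations from the identified jurisdictions."""
    laws: list[str] = []
    for j in profile.jurisdictions:
        laws.extend(LAWS_BY_JURISDICTION.get(j, []))
    profile.applicable_laws = sorted(set(laws))
    return profile.applicable_laws

profile = AISystemProfile(
    purpose="CV screening for hiring",
    data_categories=["employment history", "education"],
    ai_actors=["vendor", "HR team"],
    jurisdictions=["EU", "Singapore"],
)
print(identify_obligations(profile))  # ['EU AI Act', 'GDPR', 'PDPA']
```

In practice this inventory would be maintained in a governance tool rather than code, but the point stands: obligations fall out mechanically once purpose, data, actors, and jurisdictions are recorded.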
The next step, risk assessment, requires businesses to assess and analyze the identified risks. At this stage of the AI risk management framework, businesses rate or assign a risk score (high, medium, or low) to each risk the AI system causes or is likely to cause. The level of risk depends on the type of personal data involved and on other factors, such as the severity of the adverse impacts on individuals. For example, any AI system that poses a clear threat to the safety, livelihood, or fundamental rights of individuals constitutes a high-risk AI system. Similarly, AI systems that process sensitive personal data or children's personal data may also constitute high-risk AI systems. On the other hand, an AI-enabled video game with no adverse consequences for individuals is a low-risk AI system.
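The scoring logic above can be sketched as a simple decision function. The factor names and the ordering of the checks are assumptions chosen to mirror the examples in the text; a real assessment would weigh many more factors.

```python
def assess_risk(threatens_fundamental_rights: bool,
                processes_sensitive_data: bool,
                processes_childrens_data: bool,
                adverse_impact: bool) -> str:
    """Assign a coarse high/medium/low risk level to an AI system."""
    if threatens_fundamental_rights:
        return "high"    # clear threat to safety, livelihood, or rights
    if processes_sensitive_data or processes_childrens_data:
        return "high"    # sensitive or children's personal data
    if adverse_impact:
        return "medium"  # some adverse consequences for individuals
    return "low"         # e.g. an AI-enabled video game

print(assess_risk(False, True, False, False))   # high
print(assess_risk(False, False, False, False))  # low
```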
The next step is risk response: identifying what is required to address and manage each risk. The response varies with the type and level of the identified risk. For example, a risk may be transferred to another entity that is better equipped to handle it in a particular situation, or the organization may apply effective controls to address the risk itself.
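One way to picture this step is as a mapping from assessed risk level to a treatment option. The option names (accept, mitigate, transfer) are standard risk-management vocabulary, but this particular mapping is an illustrative assumption, not a prescription from any framework.

```python
def select_response(risk_level: str, can_transfer: bool = False) -> str:
    """Map an assessed risk level to an illustrative treatment option."""
    if risk_level == "low":
        return "accept"    # document the risk and keep monitoring
    if risk_level == "medium":
        return "mitigate"  # apply controls to reduce the risk
    # High risk: transfer it if a more capable entity can handle it,
    # otherwise apply strong controls in-house.
    return "transfer" if can_transfer else "mitigate"

print(select_response("high", can_transfer=True))  # transfer
print(select_response("low"))                      # accept
```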
This step, risk mitigation, requires businesses to adopt and implement specific controls that reduce the risk. These controls can be administrative, technical, or physical in nature, depending on the type of risk identified. For example, to address the risk of potential biases, businesses may put sensitisation protocols in place to reconcile historical biases, as well as technical tools to diagnose fairness harms and address them.
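As a minimal example of such a technical tool, the sketch below computes per-group selection rates for binary hiring-style decisions (1 = favourable, 0 = unfavourable) and flags a potential fairness harm when the gap between groups exceeds a tolerance. The group names, sample decisions, and the 0.2 tolerance are illustrative assumptions.

```python
def selection_rates(decisions: dict[str, list[int]]) -> dict[str, float]:
    """Favourable-decision rate per demographic group."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def parity_gap(decisions: dict[str, list[int]]) -> float:
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1],  # 75% selected
    "group_b": [1, 0, 0, 0],  # 25% selected
}
gap = parity_gap(decisions)
print(f"parity gap: {gap:.2f}")  # 0.50
if gap > 0.2:                    # illustrative tolerance
    print("fairness harm flagged: review training data and features")
```

Dedicated libraries offer richer fairness metrics, but even this simple diagnostic makes the bias in the sample decisions visible and actionable.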
Because AI systems are information systems, both the systems and the risks attached to them may evolve over time. It is therefore essential for businesses to continuously monitor the ongoing usage and performance of the AI system in order to identify new risks. For example, this step requires businesses to keep logs or automatic records of personal data processing, along with mechanisms that ensure individuals are informed when the use of their data changes, so as not to violate the purpose limitation principle of data protection. Similarly, obsolete data should be cleaned up from time to time to maintain the accuracy of the data.