AI Governance refers to the frameworks, rules, standards, and legal requirements that govern and manage the use of AI systems in order to protect individuals' fundamental rights, including data privacy rights. Governments around the world are increasingly drafting regulations specific to AI. However, AI systems are also subject to existing data protection laws whenever they process personal data, that is, any information relating to an identified or identifiable natural person.
Some of the key data protection obligations that AI systems may be subject to under the existing data protection framework include the following:
Most data privacy laws require privacy impact assessments and data protection impact assessments for high-risk data processing activities. AI risk assessments complement these and are essential for determining the level of risk posed by the use of an AI system and its data protection implications. Such assessments must be conducted before the AI system is implemented.
Most data privacy laws require businesses to adopt appropriate technical and organizational security measures to protect personal data.
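One common technical measure is pseudonymization of direct identifiers before they are stored or fed into an AI pipeline. The sketch below is illustrative only, using a keyed hash (HMAC-SHA256) from the Python standard library; the key name and storage advice are assumptions, not a prescribed implementation.

```python
import hashlib
import hmac

# Hypothetical key for illustration; in practice the key would be kept in a
# secrets manager or KMS, never hard-coded.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records stay linkable
    for analytics while the raw identifier is no longer stored in the dataset.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymize("jane.doe@example.com")
assert token == pseudonymize("jane.doe@example.com")  # deterministic mapping
assert token != "jane.doe@example.com"                # raw identifier not exposed
```

Because the mapping depends on a secret key, pseudonymized records can still be re-identified by whoever holds the key, so under most data protection laws they remain personal data and must be protected accordingly.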
Most data privacy laws require businesses to ensure transparency towards users. In the context of AI, users must be informed when they are interacting with an AI system, as well as of the logic underlying that system.
Most data privacy laws require businesses to allow users to obtain human intervention in automated decision-making, to opt out of or object to such processing, and to contest the resulting decisions.
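In practice, honoring these rights means the decision pipeline needs an escalation path that overrides the model whenever a user has objected. The following is a minimal sketch under assumed names (`DecisionRequest`, `route_decision`, the score threshold), not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class DecisionRequest:
    user_id: str
    score: float          # output of a hypothetical automated model
    user_objected: bool   # user exercised the right to object / opt out

def route_decision(req: DecisionRequest, threshold: float = 0.5) -> str:
    """Route a request to automated approval or to human review.

    If the user has objected to automated processing, the request is always
    escalated to a human reviewer, regardless of the model's score.
    """
    if req.user_objected:
        return "human_review"
    return "approved" if req.score >= threshold else "human_review"

# A user who objected is escalated even with a high score.
print(route_decision(DecisionRequest("u1", score=0.9, user_objected=True)))   # human_review
print(route_decision(DecisionRequest("u2", score=0.9, user_objected=False)))  # approved
```

The key design point is that the objection flag is checked before the model output is consulted, so the opt-out cannot be silently overridden by a confident prediction.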
Most data privacy laws require businesses to determine an appropriate legal basis for the processing of personal data.
Most data privacy laws require businesses to ensure data minimization and purpose limitation in relation to the processing of personal data.
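Data minimization and purpose limitation can be enforced in code by allow-listing, per declared purpose, the fields a system may receive. The sketch below uses hypothetical purposes and field names purely for illustration; it is one possible pattern, not a mandated approach.

```python
# Allow-list of fields actually needed for each declared processing purpose
# (hypothetical purposes and fields, for illustration only).
PURPOSE_FIELDS = {
    "credit_scoring": {"income", "outstanding_debt"},
    "newsletter": {"email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields required for the stated purpose.

    Raises KeyError for an undeclared purpose, so data cannot be processed
    for a purpose that was never specified (purpose limitation).
    """
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "income": 52000,
    "outstanding_debt": 3000,
}
print(minimize(record, "credit_scoring"))
# {'income': 52000, 'outstanding_debt': 3000}
```

Filtering records at the boundary of the AI system, rather than inside it, keeps fields such as `name` from ever reaching a model that has no declared need for them.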
Most data privacy laws require businesses to ensure the integrity of personal data by keeping it accurate and up to date.
Mapping existing and upcoming regulatory obligations gives businesses a roadmap to the compliance expectations that major global regulators place on AI developers and deployers. Businesses must begin developing the technical capabilities, policies, and procedures needed to continue developing and using AI systems and models while avoiding potential legal pitfalls.