With the widespread adoption of AI models and systems in the business and commercial sectors, and the rapid evolution of their capabilities and applications, governments and legislators worldwide are taking swift action to establish regulatory controls on the use of AI. These measures aim to identify, mitigate, and oversee privacy and related risks associated with AI models and systems before they can cause significant harm to individuals. This proactive global response to AI is characterized by a concerted effort to strike a delicate balance between technological innovation, business potential, individual rights, and the broader societal good.
Governments and regulatory bodies have not hesitated to act when AI models or systems become the center of controversy. The following are some examples of regulatory actions targeting AI developers and deployers:
Clearview AI
Clearview AI, a US company that developed an AI facial recognition algorithm based on photos scraped from social media websites, was recently fined almost $8 million by the UK’s Information Commissioner’s Office for collecting personal data from the internet without obtaining the consent of the data subjects. Similarly, the Italian data protection authority fined the company $21 million for breaching data protection rules. Authorities in Australia, Canada, France, and Germany have taken similar enforcement actions against the company.
In the United States, as a result of a lawsuit brought by the American Civil Liberties Union (ACLU) under Illinois’s Biometric Information Privacy Act (BIPA), Clearview AI consented to stop selling its AI facial recognition system to most businesses and private firms across the country. The company also agreed to stop offering free trial accounts to individual police officers, which had allowed them to run searches outside their departments’ purview.
Replika AI
The Italian data protection authority banned the Replika app, an AI chatbot developed by Luka Inc., from processing the personal data of Italian users. The company was also warned that it would face a fine of up to 20 million euros, or 4% of its annual gross revenue, for non-compliance with the ban. The reasons cited by the regulatory authority included concrete risks for minors, a lack of transparency, and unlawful processing of personal data.
ChatGPT
ChatGPT, a large language model-based chatbot developed by OpenAI, was banned by the Italian data protection authority and was only allowed to resume operation once it established controls to comply with the GDPR provisions on privacy notices, legal bases for data collection, and data subject rights. Data protection authorities in Canada, Spain, Germany, and the Netherlands have also initiated, or signaled their intention to initiate, investigations into the chatbot’s compliance with data protection laws.
Thus, while the potential profitability of developing, using, and deploying AI solutions is undeniable for global businesses, given the enhanced efficiency, unprecedented insights, and transformative growth the technology promises, the regulatory landscape surrounding AI remains a tumultuous frontier: vague legal frameworks and global standards evolving in real time create a unique compliance challenge and a risky business environment filled with potential liabilities. In this uncharted landscape, businesses face the imperative to be first to develop and deploy this game-changing technology while navigating the regulatory maze carefully enough to avoid massive liabilities. At such a pivotal juncture, the value of gaining insight into the regulatory obligations envisioned by global regulators cannot be overstated.
Regulatory Compliance Regime for AI
The AI regulatory compliance regime is evolving rapidly and varies from one country or region to another. A number of jurisdictions, including the European Union, Brazil, Canada, Japan, and Singapore, have introduced or are in the process of finalizing comprehensive AI laws. Much like the General Data Protection Regulation (GDPR) did for privacy, the European Union’s AI Act is leading the way on comprehensive AI regulation and is expected to come into force by the end of 2023. Once enacted, these AI laws will require businesses developing and deploying different types of AI to comply with a mammoth set of compliance obligations.
A number of global AI regulations have been introduced or are in progress, and various regulatory bodies have also issued guidelines and compliance frameworks on AI.
Since generative AI is the fastest-proliferating type of AI and relies on huge amounts of data for the training and fine-tuning of its models, businesses dealing in generative AI may also be obligated to comply with applicable data protection laws due to the use of personal data within the AI system. For example, in the US, if a company uses its generative AI model as a chatbot in a video game or other online service directed at children, it must fulfill certain requirements under the Children’s Online Privacy Protection Act of 1998 (COPPA) in relation to children’s personal data. These include providing direct notice to parents and obtaining verifiable parental consent before collecting and using children’s personal data. Similarly, the use of different types of AI for different purposes may be subject to various sectoral laws, regulatory guidance, and the like.
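To make this concrete, the sketch below outlines, under stated assumptions, how a chatbot service directed at children might gate data collection on parental notice and consent. All names here (UserSession, collect_personal_data, send_parental_notice) are illustrative, not a real API, and a production flow would add verifiable consent mechanisms, consent records, and revocation handling.

```python
# Hypothetical sketch of a COPPA-style gate for a child-directed chatbot:
# personal data is collected only after verifiable parental consent.
from dataclasses import dataclass, field

@dataclass
class UserSession:
    user_id: str
    is_child: bool                  # e.g., determined by a neutral age screen
    parental_consent: bool = False  # set only after a verifiable consent flow
    collected_data: list = field(default_factory=list)

def send_parental_notice(user_id: str) -> None:
    # Placeholder: deliver direct notice to the parent and start a
    # verifiable consent process (e.g., signed form, payment-card check).
    print(f"Notice sent to parent of user {user_id}; awaiting consent.")

def collect_personal_data(session: UserSession, data: str) -> bool:
    """Collect personal data only when the consent preconditions are met."""
    if session.is_child and not session.parental_consent:
        send_parental_notice(session.user_id)  # block collection, seek consent
        return False
    session.collected_data.append(data)
    return True

session = UserSession(user_id="u1", is_child=True)
collect_personal_data(session, "favorite_color=blue")  # blocked: no consent yet
```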
Considering this complex web of regulatory obligations, businesses must take a proactive approach to compliance to safeguard against potential liabilities and seize the unprecedented opportunities for growth and innovation that AI offers.
Based on existing and upcoming laws and regulations, the following are some of the primary compliance obligations and best practices for businesses developing and deploying AI:
Assessments

No. | Requirement | Description
1 | AI Classification Assessments | Organizations must be able to assess the class and category of their AI systems to identify the applicable regulatory compliance obligations. AI systems will be subject to different regulatory requirements depending on the risks they pose. |
2 | AI Training Data Assessment | Organizations must verify that the training data is subject to appropriate governance measures and management practices. |
3 | AI Conformity Assessment | Organizations must ensure that the AI system undergoes relevant conformity assessment as per the applicable laws and regulations before being placed on the market or put into use. |
4 | AI System Cybersecurity Assessment | Organizations must verify that appropriate technical solutions are in place to ensure the cybersecurity of the AI system. |
5 | AI-related DPIA | Organizations must assess and identify the privacy risks posed by AI systems to data subjects and society, and document and apply mitigation measures to reduce the identified risks. A Data Protection Impact Assessment (DPIA) is required for high-risk data processing activities. |
6 | Algorithmic Impact Assessment | Organizations must assess and identify the risks (other than privacy) posed by AI systems to data subjects and society, and document and apply mitigation measures to reduce the identified risks. |
7 | AI Bias Assessments | Organizations must be able to assess AI systems for any inherent bias in their decisions/outputs by conducting equity assessments (see the sketch following this table). |
8 | AI Provider Assessment | When placing an imported AI system on the market, organizations must ensure that the provider of the AI system has drawn up appropriate technical documentation as per the applicable laws and regulations. |
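As a concrete starting point for the equity assessments referenced in item 7, the sketch below computes a disparate impact ratio and checks it against the four-fifths (0.8) benchmark familiar from US employment-selection guidance. The decision data is invented for illustration, and real bias assessments combine several metrics and statistical tests.

```python
# Minimal disparate impact check (the "four-fifths rule") over model decisions.

def selection_rate(outcomes: list[int]) -> float:
    """Share of favorable decisions (1 = favorable) for a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Illustrative decisions for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25
ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40, below the 0.8 benchmark
print("Potential adverse impact" if ratio < 0.8 else "Within benchmark")
```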
Disclosures

No. | Requirement | Description
1 | Disclosure of the use of data for AI | Organizations’ privacy notices must inform data subjects if their personal data will be used in any AI system. |
2 | Disclosure of the logic of AI system | Organizations’ privacy notices must explain the logic of the AI system and the factors it relies on in making decisions. |
3 | Disclosure of the rights of data subjects in reference to AI | Organizations must also inform data subjects about their rights in relation to their personal data, e.g., the right to access, right to deletion, and right to object/opt-out. |
4 | Notification of High Risks associated with the AI System | Organizations must immediately notify the relevant entities about the high risks that an AI system presents to the health, safety, and fundamental rights of persons. |
5 | Notification of Serious Incidents or Malfunctions | Organizations must immediately notify the relevant entities about any serious incidents or malfunctions that constitute breach of obligations to protect fundamental rights. |
6 | Instructions of Use | Organizations must ensure that the AI system is accompanied by appropriate, accessible, and comprehensive instructions of use to make the operation of the AI system transparent for users. |
7 | Conformity Marking | Organizations must affix the conformity marking of the AI system to the accompanying documentation or in any other manner, as appropriate, in compliance with the applicable laws and regulations. |
8 | AI System Interaction Disclosure | Organizations must ensure that natural persons are informed that they are interacting with an AI system (see the sketch following this table). |
9 | AI System Operation Disclosure | If an organization uses an emotion recognition system or a biometric categorisation system, it must inform the natural persons exposed to these systems about their operation. |
10 | Artificially Generated/ Manipulated Content Disclosure | If an organization generates deep fakes, it must disclose that the content is artificially generated or manipulated. |
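To illustrate items 8 and 10 above, the minimal sketch below shows one way a deployer might surface an AI-interaction notice in a chatbot and attach a machine-readable provenance label to generated content. Here, generate_reply stands in for any real model call, and the disclosure wording and metadata key are assumptions.

```python
# Illustrative transparency wrappers: disclose the AI interaction up front and
# label artificially generated content in a machine-readable way.

AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def generate_reply(prompt: str) -> str:
    return f"(model output for: {prompt})"  # placeholder for a real model call

def chat(prompt: str, first_turn: bool) -> str:
    reply = generate_reply(prompt)
    # Surface the interaction disclosure prominently on the first turn.
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply

def label_generated_media(metadata: dict) -> dict:
    # Attach a provenance tag to generated/manipulated content (e.g., deepfakes).
    return {**metadata, "content_provenance": "artificially_generated"}

print(chat("What are my data rights?", first_turn=True))
print(label_generated_media({"file": "avatar.png"}))
```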
Consent

No. | Requirement | Description
1 | Informed Consent | If an organization relies on consent as a legal basis to use personal data for/by an AI system, it must obtain the informed consent of the data subjects for processing their personal data. |
Data Subject Rights

No. | Requirement | Description
1 | Right to object to/opt-out of personal data processing in context of AI systems | Data subjects must be provided with an opportunity to object to/opt out of the processing of their personal data for/by the AI system, including profiling. |
2 | Right to appeal automated decisions | Data subjects must be provided with an opportunity to appeal any automated decision-making and ask for a human review. |
3 | Right to access in context of AI systems | Data subjects must be provided with an opportunity to access their personal data being used for/by AI systems. |
4 | Right to correction in context of AI systems | Data subjects must be provided with an opportunity to rectify inaccurate personal data used for/by the AI system. |
5 | Right to delete in context of AI systems | Data subjects must be provided with an opportunity to have their personal data deleted from AI systems and any other database that will be used for/by an AI system. |
6 | Right to data portability in context of AI systems | Data subjects must be provided with an opportunity to receive their personal data in a structured and machine-readable format and to transmit it to another organization (see the sketch following this table). |
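For the portability right in item 6, an export can be as simple as serializing the data subject’s records into a structured, machine-readable format such as JSON; the record layout below is an assumed example rather than a prescribed schema.

```python
# Minimal data-portability export: a subject's records in machine-readable JSON.
import json

def export_subject_data(records: dict) -> str:
    """Serialize a data subject's records so they can be transmitted elsewhere."""
    return json.dumps(records, indent=2, ensure_ascii=False)

subject_records = {
    "subject_id": "ds-1042",
    "profile": {"email": "jane@example.com", "locale": "en-US"},
    "ai_interactions": [
        {"timestamp": "2023-05-01T10:00:00Z", "purpose": "chat_support"},
    ],
}
print(export_subject_data(subject_records))
```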
Security

No. | Requirement | Description
1 | Data Security | Organizations must protect personal data being used by the AI system through technical measures. |
2 | System Security | Organizations must protect the AI system from unauthorized access and manipulation by bad actors through technical measures. |
3 | Internal and Environmental Resilience | Organizations must ensure the safety of the AI system against errors, faults, or inconsistencies within the system or the environment in which it operates. |
4 | Redundancy, Backups, and Failsafes | Organizations must ensure the robustness of the AI system through technical redundancy solutions, including backup or fail-safe plans. |
5 | Data Poisoning Protection | Organizations must have technical solutions in place to protect the AI system against attacks that attempt to manipulate the training datasets (data poisoning; see the sketch following this table). |
6 | Adversarial Examples Protection | Organizations must have technical solutions in place to protect the AI system from attacks involving inputs designed to cause the system to make a mistake (adversarial examples). |
7 | Model Flaws Protection | Organizations must have technical solutions in place to protect the AI system from attacks involving inputs designed to exploit model flaws. |
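As one simple layer of the data poisoning protection in item 5, the sketch below pins a cryptographic fingerprint of an approved training dataset and refuses to proceed if the data has been altered. The sample data is purely illustrative, and real defenses add provenance tracking and statistical screening of training examples.

```python
# Integrity check against training-data tampering: compare the dataset's
# SHA-256 digest to the digest recorded when the dataset was approved.
import hashlib

APPROVED_SHA256 = hashlib.sha256(b"approved,training,data\n").hexdigest()

def verify_training_data(data: bytes) -> bool:
    """Return True only if the dataset matches its approved fingerprint."""
    return hashlib.sha256(data).hexdigest() == APPROVED_SHA256

print(verify_training_data(b"approved,training,data\n"))  # True: untouched
print(verify_training_data(b"poisoned,training,data\n"))  # False: tampered
```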
Governance

No. | Requirement | Description
1 | AI System Classification | Organizations are required to classify their AI systems based on their purposes and the level of risk posed by them. |
2 | AI System Documentation | Organizations are required to draw up and keep up-to-date technical and other important documentation of the AI system. |
3 | AI Logic Audit | Organizations are required to document and monitor the AI system’s logic and factors that it uses to achieve end results. |
4 | AI System Data Mapping | Organizations should be able to map the data assets, processes, vendors and third parties involved with the AI system. |
5 | Quality Management System | Organizations must put in place and document a quality management system to ensure compliance with applicable laws and regulations. |
6 | AI Risk Register | Organizations must establish, document, and implement a risk management system to evaluate the known and foreseeable risks associated with the AI system and take appropriate mitigation measures. |
7 | Data Governance Controls | Organizations must ensure that data/personal data being used in the AI system adheres to the principles of data minimization, purpose specification, and data retention. |
8 | Training Data Controls | Organizations need to be able to perform certain operations on the data/personal data being used to train the AI system (e.g., bias removal, anonymization). |
9 | AI Output Filters | Organizations need to be able to monitor output results in real time to detect any release of personal data in the output results (see the sketch following this table). |
10 | RoPA Reports | Organizations must be able to audit and demonstrate to regulators, through Records of Processing Activities (RoPA), the assets, data/personal data, processes, and vendors used by AI systems. |
11 | Human-Machine Interface/ Oversight Tools | Organizations must design and develop AI systems with appropriate human-machine interface tools to enable effective human oversight. |
12 | Operational Monitoring System | Organizations must be able to actively monitor the operation of the AI system throughout its lifecycle to ensure regulatory compliance. |
13 | Feedback Loop Monitoring | Organizations must be able to monitor feedback loops and take appropriate measures. |
14 | Algorithm Deprecation/ Disgorgement | Organizations must be able to retain versions of the AI system to be able to deprecate/claw back/disgorge the AI algorithm by removing illegal data and the learning obtained from it. |
15 | AI Event Logs | Organizations must keep the event logs for an AI system and must be able to provide access to the regulatory authority to these logs upon request. |
16 | Declaration of Conformity | Organizations must draw up a declaration of conformity for their AI systems to demonstrate compliance with the applicable laws and regulations. |
17 | Registration of AI System | Organizations must register their AI systems in the relevant databases as per the requirements of the applicable laws and regulations. |
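As an illustration of items 9 and 15 above, the sketch below screens model responses for two common personal-data patterns before release and writes an event-log entry for each redaction. The regex patterns are simplified assumptions; production filters rely on much broader PII detection.

```python
# Real-time output filter: redact common PII patterns from model responses and
# log each detection so the events can be produced for a regulator on request.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_output_filter")

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def filter_output(text: str) -> str:
    """Redact matched PII from a model response and log the event."""
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            log.info("Redacted %s from model output", label)  # AI event log entry
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(filter_output("Contact jane.doe@example.com, SSN 123-45-6789."))
```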
Securiti, Inc.
300 Santana Row
Suite 450
San Jose, CA 95128