The Regulatory Response To AI


With the widespread adoption of AI models and systems in the business and commercial sectors, and the rapid evolution of their capabilities and applications, governments and legislators worldwide are taking swift action to establish regulatory controls on the use of AI. These measures aim to identify, mitigate, and oversee privacy and related risks associated with AI models and systems before they can cause significant harm to individuals. This proactive global response to AI is characterized by a concerted effort to strike a delicate balance between technological innovation, business potential, individual rights, and the broader societal good.

Governments and regulatory bodies are not hesitating to take action when AI models or systems become the center of controversy. The following are some examples of regulatory actions targeting AI developers and deployers:

Clearview AI

Clearview AI, a US company which developed an AI facial recognition algorithm based on photos scraped from social media websites, was recently fined almost $8 million by the UK’s Information Commissioner’s Office for collecting personal data from the internet without obtaining the consent of the data subjects. Similarly, the Italian data protection authority fined the company $21 million for breaching data protection rules. Authorities in Australia, Canada, France, and Germany have taken similar enforcement actions against the company.

In the United States, through a lawsuit brought by the American Civil Liberties Union (ACLU) under the Illinois Biometric Information Privacy Act (BIPA), Clearview AI consented to stop selling its AI facial recognition system to most businesses and private firms across the country. The company also agreed to stop offering free trial accounts to individual police officers, which had allowed them to run searches outside their police departments’ purview.

Replika AI

The Italian data protection authority banned the Replika app, an AI chatbot developed by Luka Inc., from processing the personal data of Italian users. The company was also warned that it would face a fine of up to 20 million euros, or 4% of its annual gross revenue, if it failed to comply with the ban. The reasons cited by the regulatory authority included concrete risks for minors, lack of transparency, and unlawful processing of personal data.

ChatGPT

ChatGPT, a large language model-based chatbot developed by OpenAI, was banned by the Italian data protection authority and was only allowed to resume operation once it established controls to comply with the GDPR provisions on privacy notices, legal bases for data collection, and data subject rights. Further, data protection authorities in Canada, Spain, Germany, and the Netherlands have initiated, or signaled their intention to initiate, investigations into the chatbot’s compliance with data protection laws.

The potential profitability of developing and deploying AI solutions is undeniable for global businesses, given the enhanced efficiency, unprecedented insights, and transformative growth the technology promises. Yet the regulatory landscape surrounding AI remains a tumultuous frontier, where vague legal frameworks and global standards evolving in real time create a unique compliance challenge and a risky business environment filled with potential liabilities. In this uncharted landscape, businesses face the imperative to be first to develop and deploy this game-changing technology while navigating the regulatory maze carefully enough to avoid massive liabilities. At such a pivotal juncture, the value of understanding the regulatory obligations envisioned by global regulators cannot be overstated.

Regulatory Compliance Regime for AI

The AI regulatory compliance regime is evolving rapidly and varies from one country or region to another. A number of jurisdictions, e.g., the European Union, Brazil, Canada, Japan, and Singapore, have introduced or are in the process of finalizing comprehensive AI laws. Much as the General Data Protection Regulation (GDPR) did for privacy, the European Union’s AI Act is leading the way for comprehensive AI regulation and is expected to come into force by the end of 2023. Once enacted, these AI laws will require businesses developing and deploying different types of AI to comply with a mammoth set of obligations.

Some of the global AI regulations are:

  • Canada Bill C-27 (AIDA) (under consideration with the Standing Committee on Industry and Technology)
  • New York Local Law No. 144 (Law 144) (enforcement began on 5 July 2023)
  • California Senate Bill 313 (pending hearing with the Senate Appropriations Committee)
  • Brazil Draft AI Law (under consideration)
  • EU AI Act (expected to come into effect in 2023)
  • Shanghai AI Regulation (came into effect on 1 October 2022)

In addition to AI regulations, various regulatory bodies have issued guidelines and compliance frameworks on AI such as the following:

  • NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)
  • UK ICO’s AI and Data Protection Risk Toolkit
  • Singapore Infocomm Media Development Authority AI testing toolkit
  • Australian NSW AI Assurance Framework 
  • European Commission guidelines on Ethical Use of Artificial Intelligence in educational settings
  • French DPA Self-Assessment Guide for AI systems
  • Spanish DPA Guide on machine learning
  • China Cyberspace Administration draft policy on Measures on the Management of Generative Artificial Intelligence
  • India Council of Medical Research Guidelines on the use of AI in biomedical research and healthcare
  • Vietnam draft National Standard on Artificial Intelligence and Big Data

Since generative AI is the fastest-proliferating type of AI and relies on huge amounts of data for training and fine-tuning its models, businesses dealing in generative AI may also be obligated to comply with applicable data protection laws due to the use of personal data within the AI system. For example, in the US, if a company uses its generative AI model as a chatbot in a video game or other online service directed at children, it must fulfill certain requirements under the Children’s Online Privacy Protection Act of 1998 (COPPA) in relation to children’s personal data. These include providing direct notice to, and obtaining affirmative consent from, the children’s parents before collecting and using children’s personal data. Similarly, the use of different types of AI for different purposes may be subject to various sectoral laws, regulatory guidance, and so on.
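In application code, the parental-consent requirement described above can be sketched as a simple gate on data collection. This is a minimal illustration, not a compliance implementation: the `UserProfile` fields and the `may_collect_personal_data` function are hypothetical, and a real consent flow involves verifiable consent mechanisms well beyond a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    is_child: bool                    # under 13, per COPPA's threshold
    parental_consent_verified: bool   # set only after a verifiable consent flow

def may_collect_personal_data(profile: UserProfile) -> bool:
    """Gate a chatbot's personal-data collection on COPPA-style consent.

    Hypothetical sketch: a child's data may be collected only after direct
    notice has been given and verifiable parental consent recorded.
    """
    if not profile.is_child:
        return True  # adult users: other legal bases may apply
    return profile.parental_consent_verified

# Usage: a child without verified parental consent must be blocked.
child = UserProfile("u1", is_child=True, parental_consent_verified=False)
adult = UserProfile("u2", is_child=False, parental_consent_verified=False)
print(may_collect_personal_data(child))  # False
print(may_collect_personal_data(adult))  # True
```

The design point is that the consent check lives in one function that every data-collection path must call, so it can be audited and tested in isolation.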

Considering this complex web of regulatory obligations, businesses must take a proactive approach to compliance to safeguard against potential liabilities and seize the unprecedented opportunities for growth and innovation that AI offers.

Based on existing and upcoming laws and regulations, the following are some of the primary compliance obligations and best practices for businesses developing and deploying AI:

Assessments

1. AI Classification Assessments: Organizations must be able to assess the class and category of their AI systems to identify the applicable regulatory compliance obligations, as AI systems are subject to different regulatory requirements depending on their risks.
2. AI Training Data Assessment: Organizations must ensure that the training data is subject to appropriate governance measures and management practices.
3. AI Conformity Assessment: Organizations must ensure that the AI system undergoes the relevant conformity assessment, as per the applicable laws and regulations, before being placed on the market or put into use.
4. AI System Cybersecurity Assessment: Organizations must verify that appropriate technical solutions are in place to ensure the cybersecurity of the AI system.
5. AI-Related DPIA: Organizations must assess and identify the privacy risks posed by AI systems to data subjects and society, and document and apply mitigation measures to reduce the identified risks. A DPIA (data protection impact assessment) is required for high-risk data processing activities.
6. Algorithmic Impact Assessment: Organizations must assess and identify the risks (other than privacy) posed by AI systems to data subjects and society, and document and apply mitigation measures to reduce the identified risks.
7. AI Bias Assessments: Organizations must be able to assess AI systems for any inherent bias in their decisions/outputs by conducting equity assessments.
8. AI Provider Assessment: When importing an AI system onto the market, organizations must ensure that the provider of the AI system has drawn up appropriate technical documentation as per the applicable laws and regulations.
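A classification assessment like item 1 can be prototyped as a lookup from an intended use to a risk tier and its obligations. The tier names below follow the EU AI Act's draft risk pyramid (unacceptable, high, limited, minimal); the use-case keys and obligation strings are illustrative assumptions, not text from any regulation.

```python
# Hypothetical sketch of an AI classification assessment.
RISK_TIERS = {
    "social_scoring": "unacceptable",  # prohibited practices
    "cv_screening": "high",            # employment decisions
    "chatbot": "limited",              # transparency duties only
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "unacceptable": ["do not deploy"],
    "high": ["conformity assessment", "risk management system", "event logging"],
    "limited": ["disclose AI interaction to users"],
    "minimal": ["voluntary codes of conduct"],
}

def classify(use_case: str) -> tuple[str, list[str]]:
    """Map a use case to its risk tier and the obligations attached to it."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return tier, OBLIGATIONS.get(tier, ["manual legal review required"])

tier, duties = classify("cv_screening")
print(tier)    # high
print(duties)  # ['conformity assessment', 'risk management system', 'event logging']
```

A real assessment is a legal judgment, not a table lookup; the value of even a toy registry like this is that it forces every system to be classified before deployment and makes unclassified uses visible.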

Disclosures

1. Disclosure of the Use of Data for AI: Organizations’ privacy notices must inform data subjects if their personal data will be used in any AI system.
2. Disclosure of the Logic of the AI System: Privacy notices must explain the logic of the AI system and the factors it relies on in making decisions.
3. Disclosure of Data Subject Rights in Reference to AI: Organizations must also inform data subjects about their rights in relation to their personal data, e.g., the right to access, the right to deletion, and the right to object/opt out.
4. Notification of High Risks Associated with the AI System: Organizations must immediately notify the relevant entities about the high risks that an AI system presents to the health, safety, and fundamental rights of persons.
5. Notification of Serious Incidents or Malfunctions: Organizations must immediately notify the relevant entities about any serious incidents or malfunctions that constitute a breach of obligations to protect fundamental rights.
6. Instructions for Use: Organizations must ensure that the AI system is accompanied by appropriate, accessible, and comprehensive instructions for use, so that the system’s operation is transparent to users.
7. Conformity Marking: Organizations must affix the conformity marking to the AI system’s accompanying documentation, or in any other appropriate manner, in compliance with the applicable laws and regulations.
8. AI System Interaction Disclosure: Organizations must ensure that natural persons are informed that they are interacting with an AI system.
9. AI System Operation Disclosure: If an organization uses an emotion recognition system or a biometric categorisation system, it must inform the natural persons exposed to these systems about their operation.
10. Artificially Generated/Manipulated Content Disclosure: If an organization generates deep fakes, it must disclose that the content is artificially generated or manipulated.

Consent

1. Informed Consent: If an organization relies on consent as the legal basis for using the personal data of data subjects for/by an AI system, it must obtain the informed consent of the data subjects for processing their personal data.
2. Right to Object to/Opt Out of Personal Data Processing in the Context of AI Systems: Data subjects must be given the opportunity to object to, or opt out of, the processing of their personal data for/by the AI system, including profiling.

Data Subject Rights

1. Right to Object to/Opt Out of Personal Data Processing in the Context of AI Systems: Data subjects must be given the opportunity to object to, or opt out of, the processing of their personal data for/by the AI system, including profiling.
2. Right to Appeal Automated Decisions: Data subjects must be given the opportunity to appeal any automated decision and ask for human review.
3. Right to Access in the Context of AI Systems: Data subjects must be given the opportunity to access their personal data being used for/by AI systems.
4. Right to Correction in the Context of AI Systems: Data subjects must be given the opportunity to rectify inaccurate personal data used for/by the AI system.
5. Right to Deletion in the Context of AI Systems: Data subjects must be given the opportunity to have their personal data deleted from AI systems and from any other database whose contents will be used for/by an AI system.
6. Right to Data Portability in the Context of AI Systems: Data subjects must be given the opportunity to receive their personal data in a structured, machine-readable format and to transmit the data to another organization.

Security

1. Data Security: Organizations must protect the personal data used by the AI system through technical measures.
2. System Security: Organizations must protect the AI system from unauthorized access and manipulation by bad actors through technical measures.
3. Internal and Environmental Resilience: Organizations must ensure the safety of the AI system against errors, faults, or inconsistencies within the system or the environment in which it operates.
4. Redundancy, Backups, and Failsafes: Organizations must ensure the robustness of the AI system through technical redundancy solutions, including backup and fail-safe plans.
5. Data Poisoning Protection: Organizations must have technical solutions for protecting the AI system against attacks that attempt to manipulate its training datasets (data poisoning).
6. Adversarial Examples Protection: Organizations must have technical solutions for protecting the AI system against inputs designed to cause the system to make a mistake (adversarial examples).
7. Model Flaws Protection: Organizations must have technical solutions for protecting the AI system against inputs designed to exploit model flaws.
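One concrete control for the data poisoning item above is integrity-checking training data against known-good checksums before each training run, so that shards tampered with after ingestion are caught. This is a sketch of just that one control, assuming a manifest recorded at ingestion time; the shard names and contents are illustrative.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a blob of training data."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical manifest of approved training shards, recorded at ingestion time.
APPROVED_MANIFEST = {
    "shard-001": sha256_of(b"training records, batch 1"),
    "shard-002": sha256_of(b"training records, batch 2"),
}

def verify_training_data(shards: dict[str, bytes]) -> list[str]:
    """Return the names of shards whose contents no longer match the manifest."""
    tampered = []
    for name, blob in shards.items():
        expected = APPROVED_MANIFEST.get(name)
        if expected is None or sha256_of(blob) != expected:
            tampered.append(name)
    return tampered

# Usage: an attacker silently edits shard-002 between ingestion and training.
shards = {
    "shard-001": b"training records, batch 1",
    "shard-002": b"poisoned records",
}
print(verify_training_data(shards))  # ['shard-002']
```

Checksums only detect tampering after ingestion; poisoned data that enters the pipeline at collection time requires separate vetting of sources.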

Governance

1. AI System Classification: Organizations are required to classify their AI systems based on their purposes and the level of risk they pose.
2. AI System Documentation: Organizations are required to draw up and keep up to date the technical and other important documentation of the AI system.
3. AI Logic Audit: Organizations are required to document and monitor the AI system’s logic and the factors it uses to reach its results.
4. AI System Data Mapping: Organizations should be able to map the data assets, processes, vendors, and third parties involved with the AI system.
5. Quality Management System: Organizations must put in place and document a quality management system to ensure compliance with applicable laws and regulations.
6. AI Risk Register: Organizations must establish, document, and implement a risk management system to evaluate the known and foreseeable risks associated with the AI system and take appropriate mitigation measures.
7. Data Governance Controls: Organizations must ensure that the data/personal data used in the AI system adheres to the principles of data minimization, purpose specification, and data retention.
8. Training Data Controls: Organizations need to be able to perform certain operations on the data/personal data used to train the AI system (e.g., bias removal, anonymization).
9. AI Output Filters: Organizations need to be able to monitor output results in real time to detect any release of personal data in the outputs.
10. ROPA Reports: Organizations must be able to audit, and demonstrate to regulators, the use of assets, data/personal data, processes, and vendors by AI systems in records of processing activities (ROPA).
11. Human-Machine Interface/Oversight Tools: Organizations must design and develop AI systems with appropriate human-machine interface tools to enable effective human oversight.
12. Operational Monitoring System: Organizations must be able to actively monitor the operation of the AI system throughout its lifecycle to ensure regulatory compliance.
13. Feedback Loop Monitoring: Organizations must be able to monitor feedback loops and take appropriate measures.
14. Algorithm Deprecation/Disgorgement: Organizations must retain versions of the AI system so that they can deprecate, claw back, or disgorge the AI algorithm by removing illegal data and the learning obtained from it.
15. AI Event Logs: Organizations must keep event logs for the AI system and must be able to provide the regulatory authority with access to these logs upon request.
16. Declaration of Conformity: Organizations must draw up a declaration of conformity for their AI systems to demonstrate compliance with the applicable laws and regulations.
17. Registration of AI System: Organizations must register their AI systems in the relevant databases as required by the applicable laws and regulations.
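The output-filter obligation (item 9 above) can be sketched as a redaction pass over model output before it is released. This is a toy illustration under loud assumptions: the two regex patterns below catch only email addresses and US SSN-like strings, and production PII detection needs far broader coverage than any short pattern list.

```python
import re

# Hypothetical output filter: patterns are illustrative, not exhaustive.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_personal_data(model_output: str) -> tuple[str, bool]:
    """Redact email addresses and SSN-like strings; report whether any were found.

    The boolean lets the caller also log the event for the AI event logs
    (item 15) rather than silently scrubbing and discarding the signal.
    """
    found = bool(EMAIL.search(model_output) or SSN.search(model_output))
    cleaned = SSN.sub("[REDACTED-SSN]", EMAIL.sub("[REDACTED-EMAIL]", model_output))
    return cleaned, found

out, leaked = redact_personal_data("Contact jane.doe@example.com, SSN 123-45-6789.")
print(leaked)  # True
print(out)     # Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

Returning a detection flag alongside the cleaned text is the key design choice: the same function both mitigates the leak and produces the evidence an auditor or regulator may later ask for.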
