AI Security & Governance Certification

Application of Existing Laws

Data Protection Laws

In most instances, the entities developing and using AI are also subject to the relevant data protection laws, since the vast amounts of data ingested by AI models often contain personal information.

Common obligations under data protection laws that may apply to entities dealing with AI systems include providing privacy notices to consumers, establishing a lawful basis for processing personal data, honoring data subject rights, notifying authorities of personal data breaches, and complying with cross-border data transfer requirements.

In multiple recent instances, data protection authorities have taken enforcement actions against companies developing or using AI in violation of data protection laws. The following are some recent examples:

  • OpenAI
    In early 2023, EU member countries made headlines for their regulatory actions against emerging AI technologies. Italy became the first European country to temporarily ban the use of ChatGPT after its data protection authority, the Garante, raised serious concerns about ChatGPT's collection, use, and retention of users' personal data. The ban led regulatory bodies in other EU countries, such as France and Spain, to review the use of the chatbot in their own jurisdictions. Finally, in April 2023, the European Data Protection Board set up a task force dedicated to cooperation and the exchange of information on possible enforcement actions by data protection agencies across the EU.
  • Clearview AI
    Clearview AI, a US company that developed an AI facial recognition algorithm based on photos scraped from social media websites, was fined almost $8 million by the UK's Information Commissioner's Office for collecting personal data from the internet without obtaining the data subjects' consent. Similarly, the Italian data protection authority fined the company $21 million for breaching data protection rules. Authorities in Australia, Canada, France, and Germany have taken similar enforcement actions against the company.

    In the United States, as part of the settlement of a lawsuit brought by the American Civil Liberties Union (ACLU) under Illinois's Biometric Information Privacy Act (BIPA), Clearview AI agreed to stop selling its AI facial recognition system to most businesses and private firms across the U.S. The company also agreed to stop offering free trial accounts to individual police officers, which had allowed them to run searches outside their police departments' purview.

  • Replika AI
    The Italian data protection authority banned the Replika app, an AI chatbot developed by Luka Inc., from processing the personal data of Italian users. The company was also warned that it could face a fine of up to 20 million euros, or 4% of its annual gross revenue, for non-compliance with the ban. The reasons cited by the regulatory authority included concrete risks to minors, a lack of transparency, and unlawful processing of personal data.

Sectoral Laws

Although it is important to keep abreast of AI-specific laws and regulations, businesses must also ensure compliance with applicable requirements under existing legal frameworks. Businesses may be subject to various sector-specific laws and regulations when developing and deploying AI and AI-enabled technologies.

For example, a business operating in the US and developing or deploying AI solutions must ensure the AI solution is not unfair or deceptive to consumers within the meaning of Section 5 of the Federal Trade Commission Act (FTC Act). As noted above in the discussion of the US's AI regulatory approach, the FTC has released specific guidelines for businesses on how their development and use of AI can be unfair or deceptive under the FTC Act. Likewise, businesses operating in specific sectors, such as healthcare, financial services, and education, may be subject to obligations under different sector-specific laws and regulations at both the federal and state levels.

As another example, in the AIDA Companion document, the Government of Canada identified a number of existing laws that apply to the use of AI in Canada, including the following:

  • The Canada Consumer Product Safety Act
  • The Food and Drugs Act
  • The Motor Vehicle Safety Act
  • The Bank Act
  • The Canadian Human Rights Act and provincial human rights laws
  • The Criminal Code

Therefore, while it is imperative for businesses to remain vigilant and up to date on new AI laws and regulations, they must also maintain compliance with existing legal frameworks when developing and using AI.
