Overview: The Regulatory Response to AI


With the widespread adoption of AI models and systems in the business and commercial sectors, and the rapid evolution of their capabilities and applications, governments and legislators worldwide are taking swift action to establish regulatory controls on the use of AI. These measures aim to identify, mitigate, and oversee privacy and related risks associated with AI models and systems before they can cause significant harm to individuals. This proactive global response to AI is characterized by a concerted effort to strike a delicate balance between technological innovation, business potential, individual rights, and the broader societal good.

So far, several jurisdictions, including the European Union, Brazil, Canada, Japan, and Singapore, have introduced comprehensive AI laws or are in the process of finalizing them. Much like the General Data Protection Regulation (GDPR) did for privacy, the European Union's AI Act is leading the way among comprehensive AI regulations. Once enacted, these AI laws will require businesses developing and deploying different types of AI to meet a mammoth set of compliance obligations.

Some of the AI regulations emerging around the world (several of which have already been passed) include:

  • Canada Bill C-27 (AIDA) (under consideration with the Standing Committee on Industry and Technology)
  • New York City Local Law No. 144 (Law 144) (enforcement began on 5 July 2023)
  • California Senate Bill 313 (pending a hearing before the Senate Appropriations Committee)
  • Brazil Draft AI Law (under consideration)
  • EU AI Act (to be formally adopted by both the Parliament and the Council)
  • Shanghai AI Regulation (came into effect on 1 October 2022)
  • US Executive Order 14110 of 30 October 2023 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

In addition to AI regulations, various regulatory bodies have issued guidelines and compliance frameworks on AI such as the following:

  • NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)
  • UK ICO's AI and Data Protection Risk Toolkit
  • Singapore Infocomm Media Development Authority's AI testing toolkit
  • Australian NSW AI Assurance Framework
  • European Commission guidelines on the ethical use of artificial intelligence in educational settings
  • French DPA self-assessment guide for AI systems
  • Spanish DPA guide on machine learning
  • China Cyberspace Administration draft Measures on the Management of Generative Artificial Intelligence
  • Indian Council of Medical Research guidelines on the use of AI in biomedical research and healthcare
  • Vietnam draft National Standard on Artificial Intelligence and Big Data

Since generative AI is the fastest-proliferating type of AI and relies on huge amounts of data for the training and fine-tuning of its models, businesses dealing in generative AI may also be obligated to comply with applicable data protection laws because of the personal data used within the AI system. For example, in the US, if a company uses its generative AI model as a chatbot in a video game or other online service directed at children, the company must fulfill certain requirements under the Children's Online Privacy Protection Act of 1998 (COPPA) with respect to children's personal data. These requirements include providing direct notice to parents and obtaining verifiable parental consent before collecting and using children's personal data. Similarly, the use of different types of AI for different purposes may be subject to various sectoral laws, regulatory guidance, and similar requirements.
