AI Security & Governance Certification


Attacks Against Generative AI Systems – NIST Trustworthy & Responsible AI


The National Institute of Standards and Technology (NIST) also highlights similar issues, describing the following categories of attacks against AI systems:

  • Evasion Attacks: Involve manipulating inputs to AI systems in subtle ways that cause them to make incorrect decisions, bypassing the intended security measures. An example would be adding markings to a stop sign so that an autonomous vehicle misinterprets it as a speed limit sign (see the code sketch after this list).
  • Poisoning Attacks: Involve corrupting the data AI models learn from, aiming to degrade their performance or functionality. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions.
  • Privacy Attacks: Target the privacy aspects of AI systems, seeking to infer sensitive information from model outputs or training data. For example, an adversary can ask a chatbot numerous legitimate questions and then use the answers to reverse-engineer the model, exposing its weak spots or revealing information about its training sources.
  • Abuse Attacks: Involve the insertion of incorrect information into a source, such as a web page or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.
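To make the evasion category concrete, the following is a minimal sketch of a gradient-based input perturbation in the style of the Fast Gradient Sign Method. The toy classifier, random input, placeholder label, and epsilon value are illustrative assumptions for demonstration only; a real evasion attack would target an actual deployed model such as a traffic-sign classifier.

```python
# Minimal sketch of an evasion (adversarial example) attack, FGSM-style.
# All inputs here are toy placeholders; the point is only to show how a
# small, crafted perturbation of the input may change a model's prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in classifier (untrained, for illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32)   # placeholder "image"
y_true = torch.tensor([3])     # placeholder ground-truth class

# 1. Compute the loss gradient with respect to the input.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y_true)
loss.backward()

# 2. Take one signed-gradient step within a small epsilon budget so the
#    change to the input stays subtle.
epsilon = 0.03
perturbed = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(perturbed).argmax(dim=1).item())
```

The key point the sketch illustrates is that the perturbation is not random noise: it is computed from the model's own loss gradient, which is why visually negligible changes (like markings on a stop sign) can be enough to flip a decision.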
