The National Institute of Standards and Technology (NIST) too highlights similar issues.
- Evasion Attacks: Involve manipulating inputs to AI systems in subtle ways that cause them to make incorrect decisions, bypassing the intended security measures. Examples would include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs.
- Poisoning Attacks: Involve corrupting the data AI models learn from, aiming to degrade their performance or functionality. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions.
- Privacy Attacks: Target the privacy aspects of AI systems, seeking to infer sensitive information from model outputs or training data. For example, an adversary can ask a chatbot numerous legitimate questions and then use the answers to reverse engineer the model so as to find its weak spots — or guess at its sources.
- Abuse Attacks: Involve the insertion of incorrect information into a source, such as a web page or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.