So, what are the problems or blind spots that lead to unregulated or uncontrolled AI?
Which AI models are currently active in your organization, whether deployed directly by your developers (open source or custom models) or provided through your SaaS vendors? Without visibility into these systems, "shadow AI" can proliferate unchecked, threatening security, ethics, and compliance.
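One place shadow AI surfaces is in developer dependencies. As a minimal, hypothetical sketch (the library list below is illustrative, not an exhaustive inventory method), a scan of a Python requirements file can flag AI-related packages that nobody formally registered:

```python
# Illustrative sketch: flag AI/ML libraries in a requirements.txt to help
# surface "shadow AI". The library set is an assumption for the example;
# a real inventory would cover SaaS integrations and model files too.
AI_LIBRARIES = {"openai", "anthropic", "transformers", "langchain", "torch", "tensorflow"}

def find_ai_dependencies(requirements_text: str) -> list[str]:
    """Return dependency names from a requirements file that look AI-related."""
    found = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # Take the package name before any version specifier.
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip().lower()
        if name in AI_LIBRARIES:
            found.append(name)
    return found
```

A check like this is only a starting signal; it catches direct dependencies, not models reached over HTTP or embedded in vendor products.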
What is the risk rating of each AI model? Lack of awareness of model risks can lead to issues such as malicious use, toxicity, hallucinated responses, bias, and discrimination.
Which security controls are enabled for AI models? Models resemble data systems: they hold vast, compressed information and produce outputs shaped by the valuable data they were trained on. What security controls can be applied within or around these models so they are not open to manipulation, data leakage, and malicious attacks?
What data is being used in AI models? A lack of clarity about which data feeds which model and which AI pipelines, whether for training, tuning, or inference, raises concerns about entitlements and the potential leakage of sensitive data.
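To make the entitlement question concrete, here is a minimal, hypothetical sketch of tagging training records by the sensitive fields they contain, so a pipeline can report which data classes feed which model. The field names are assumptions for illustration; real classification would rely on a data catalog or scanner:

```python
# Illustrative sketch: tag a training record with the sensitive field names
# it contains. SENSITIVE_FIELDS is an assumed, simplified taxonomy.
SENSITIVE_FIELDS = {"email", "ssn", "phone", "address", "dob"}

def classify_record(record: dict) -> set[str]:
    """Return the sensitive field names present in one training record."""
    return {key for key in record if key.lower() in SENSITIVE_FIELDS}
```

Aggregating these tags per pipeline gives a first answer to "which sensitive data reached which model," before entitlements are enforced.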
What controls are there on prompts, agents, and assistants? Unguarded prompts, agents, and assistants open the door to harmful interactions, threatening user safety and ethical principles. It's crucial to understand how the data generated by these models is being utilized – whether it's being shared in a Slack channel, integrated into a website as a chatbot, disseminated through an API, or embedded in an app. Moreover, while these agents serve as channels for legitimate queries, they also become potential pathways for new types of attacks on AI systems.
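A prompt guardrail can be as simple as screening user input before it reaches a model or agent. The sketch below is a deliberately minimal assumption-laden example (the deny patterns are illustrative); production systems typically combine classifiers, policy engines, and output filtering rather than regex alone:

```python
import re

# Illustrative guardrail: screen prompts against simple deny patterns before
# they reach a model or agent. Patterns here are assumptions for the example.
DENY_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),  # crude injection signal
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like pattern (possible PII)
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes screening, False if it is blocked."""
    return not any(pattern.search(prompt) for pattern in DENY_PATTERNS)
```

The same checkpoint is where logging belongs, so every blocked or allowed prompt leaves an audit trail for the compliance questions that follow.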
Are AI models compliant with ever-evolving industry standards and privacy regulations, including the NIST (National Institute of Standards and Technology) AI Risk Management Framework (RMF), the EU AI Act, and emerging regulations in countries such as Canada, China, Brazil, and Singapore?