AI Security & Governance Certification

Course content

Create Account

Log in / Create account to save progress and earn badges

AI Security & Governance Certification
View course details →

5-Step Path to AI Governance

Mark Complete Enroll now to save progress and earn badges. Click to continue.

So, what does the 5-step path to AI Governance look like?

Step 1. Discover and catalog AI models 

This step aims to give businesses a complete and comprehensive overview of their AI usage by identifying and recording details of all AI models used in public clouds, private environments, and third-party apps. It covers the models’ purposes, training data, architecture, inputs, outputs, and interactions, including undocumented or unsanctioned AI models. Creating a centralized catalog of this information enhances transparency, governance, and the effective use of AI, supporting better decisions and risk management. It’s essential for revealing the full range of AI applications and breaking down operational silos within the organization.

Step 2. Assess risks and classify AI Models

This step allows businesses to assess the risks of their AI systems at pre-development and development stage and implement risk mitigation steps. This step also involves leveraging model cards that offer predefined risk evaluations for AI models, including a model’s description, intended use, limitations, and ethical considerations. These risk ratings provide comprehensive details, covering aspects such as toxicity, maliciousness, bias, copyright considerations, hallucination risks, and even model efficiency in terms of energy consumption and inference runtime. Based on these ratings, you can decide which models to sanction for deployment and use, which models to block, and which ones need additional guardrails before consumption.

Step 3. Map and monitor data + AI flows

Data flows into the AI systems for training, tuning and inference and data flows out of AI systems as the output. This step allows businesses to uncover full context around their AI models and AI systems i.e. map AI models and systems to associated data sources and systems, data processing, SaaS applications, potential risks, and compliance obligations. This comprehensive mapping enables privacy, compliance, security and data teams to identify dependencies, pinpoint potential points of failure, and ensure that AI governance is proactive rather than reactive. 

Up to step three involves different levels of visibility into data and AI. Now, you need to implement guardrails to ensure safe data and AI usage.

Step 4. Implement data + AI controls

This step allows the establishment of strict controls for the security and confidentiality of data that is both put into and generated from AI models. Such controls include data security and privacy controls mandated by security frameworks and privacy laws respectively. For example, redaction or anonymization techniques may be applied in order to remove identifiable values from datasets. It ensures the safe ingestion of data into AI models, aligning with enterprise data policies and user entitlements. If sensitive data finds its way into LLM models, securing it becomes extremely difficult. Similarly, if enterprise data is converted into vector forms, securing it becomes more challenging. 

On the data generation and output side, safeguarding AI interactions requires caution against external attacks, malicious internal use, and misconfigurations. To ensure secure conversations with AI assistants, bots, and agents, LLM firewalls should be deployed to filter harmful prompts, retrievals, and responses. These firewalls should be able to defend against various vulnerabilities highlighted in the OWASP Top 10 for LLMs, and in the NIST AI RMF frameworks, including prompt injection attacks and data exfiltration attacks.

Step 5. Comply with regulations

Businesses using AI systems must comply with AI-specific regulations and standards as well as data privacy obligations that relate to the use of AI. To streamline this demanding compliance process, businesses can leverage comprehensive compliance automation tailored to AI. Such a system offers a wide-ranging catalog of global AI regulations and frameworks, including the NIST AI RMF and the EU AI Act, among others. It facilitates the creation of distinct AI projects within its framework, enabling users to identify and apply the necessary controls for each project. This process includes both automated checks and assessments that require input from stakeholders, providing a holistic approach to ensuring compliance.

Enterprises that successfully carry out these five steps, will achieve – 

  • Full transparency into their sanctioned and unsanctioned AI systems,
  • Clear visibility of their AI risks,
  • Mapping of AI and data,
  • Strong automated AI+Data controls, and
  • Compliance with global AI regulations.


Get in touch

[email protected]
Securiti, Inc.
300 Santana Row
Suite 450
San Jose, CA 95128

Sitemap - XML Sitemap