This step allows businesses to gain a comprehensive understanding of all AI models in use across public clouds and SaaS applications within the organizations as well as identify their intended purposes and characteristics. This step allows businesses to document their AI models, how they are trained, inputs and outputs and the interaction of AI models with other systems. Idea is to break down walls and shine a light on every corner of your AI landscape including shadow AI.
This step allows businesses to assess the risks of their AI systems at pre-development, development and post-development phases and document mitigations to the risks ultimately classifying AI models depending on their risk levels. Various agencies offer out-of-the-box, self-reported ratings for popular open source and commercial AI models. These ratings provide comprehensive details, covering aspects such as toxicity, maliciousness, bias, copyright considerations, hallucination risks, and even model efficiency in terms of energy consumption and inference runtime. It is this stage where AI systems and models are classified based on their characteristics.
Data flows into the AI systems for training, tuning and inference and data flows out of AI systems as the output. This step allows businesses to uncover full context around their AI models and AI systems i.e. map AI models and systems to associated data sources and systems, data processing paths, SaaS vendors or applications, potential risks such as sensitive information leakage, and compliance obligations. This comprehensive mapping enables privacy, compliance, security and data teams to identify dependencies, pinpoint potential points of failure, and ensure that AI governance is proactive rather than reactive. Up to step three involves different levels of visibility into data and AI. Now, you need to implement guardrails to ensure safe data and AI usage.
This step allows the establishment of strict controls for the security and confidentiality of data that is both put into and generated from AI models. Such controls include data security and privacy controls mandated by security frameworks and privacy laws respectively. For example, anonymization techniques may be applied in order to remove identifiable values from datasets. It ensures the safe ingestion of data into AI models, aligning with enterprise data policies and user entitlements. On the output side, safeguarding AI interactions requires caution against external attacks, malicious internal use, and misconfigurations. To ensure secure conversations with AI assistants, bots, and agents, LLM firewalls should be deployed to filter harmful prompts, retrievals, and responses.
AI systems may involve the use of personal data and therefore, are subject to data protection laws. It is essential for businesses to identify applicable data privacy laws depending on the jurisdiction they are based in, geographical location where they order its services and the residencies of data subjects. Once applicable data privacy laws are identified, readiness assessments must be conducted in order to be able demonstrate compliance with the applicable privacy laws. Besides, businesses are able to demonstrate compliance with regulatory requirements specific to AI systems and implement industry standards such as the NIST AI Risk Management Framework.
Enterprises that successfully carry out these five steps and implement sound AI governance practices will spearhead comprehensive transformation that translates to business value and gain full visibility into their AI systems.