Let us look at what an AI governance program is made up of. There are four building blocks:
Models are proliferating at a rapid pace, reflecting the dynamic and ever-expanding nature of the field. The problem is that as these models show up in AI platforms like Google Cloud's Vertex AI Model Garden or Amazon Bedrock, savvy developers will start using some of them, with or without approval.
The model catalog's purpose is to track which models are in use, their version numbers, and their approval status. For approved models, the catalog should also show which data sets were used to train them, how they were evaluated, and their fairness scores.
You may also be required to delete, or disgorge, an algorithm if it was built in contravention of applicable laws or trained on personal data obtained improperly. In such cases, versioning of the algorithms can save you from deleting the whole model; instead, you delete only the parts developed on the ill-gotten data.
This section of AI governance focuses on model consumption, especially mapping business use cases to approved models and identifying risks. Catalog users should be able to see each model's purpose and business owner. The catalog tracks the entire model lifecycle, with steps like approvals from legal, the CISO, the CDO, and auditors, all the way to model retirement.
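The lifecycle steps above can be sketched as a simple linear state machine; the stage names are assumptions that roughly follow the approval chain described (legal, CISO, CDO, audit), and a real workflow might allow parallel reviews:

```python
# Illustrative lifecycle stages, in order; not a standard or product schema.
LIFECYCLE = ["proposed", "legal_review", "security_review",
             "data_review", "audit", "approved", "deployed", "retired"]

def advance(stage: str) -> str:
    """Move a model to the next lifecycle stage, ending at retirement."""
    i = LIFECYCLE.index(stage)  # raises ValueError on an unknown stage
    if i == len(LIFECYCLE) - 1:
        raise ValueError("model already retired")
    return LIFECYCLE[i + 1]
```

Encoding the lifecycle explicitly lets the governance tool reject deployments that skip a required approval.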
Once approved models are deployed, they need a mechanism to defend against adversarial attacks as well as capture continuous operational telemetry. The risk areas mentioned above need constant monitoring for unexplained changes and anomalies. When aberrations are detected, alerts and notifications should be raised intelligently, without causing alert fatigue.
One of the biggest issues with AI models is that their nondeterministic responses can lead to hallucinations, so monitoring for accuracy and relevance is critical. As more AI models go into production in 2024, a new role of AI governance will be to track their performance and cost. We expect new chips from NVIDIA, Arm, Intel, and AMD to lower the cost of AI model inference, but that cost will still be higher than traditional analytics.
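Per-model cost tracking can be as simple as aggregating token counts against a rate card. A sketch, assuming token-based pricing; the model names and per-token rates below are placeholders, not real vendor pricing:

```python
from collections import defaultdict

# Assumed rate card: dollars per 1,000 tokens, by model. Placeholder values.
PRICE_PER_1K_TOKENS = {"model-a": 0.002, "model-b": 0.010}

usage: dict[str, dict[str, float]] = defaultdict(
    lambda: {"calls": 0, "tokens": 0, "cost": 0.0})

def record_inference(model: str, tokens: int) -> None:
    """Accumulate call count, token volume, and estimated cost per model."""
    stats = usage[model]
    stats["calls"] += 1
    stats["tokens"] += tokens
    stats["cost"] += tokens / 1000 * PRICE_PER_1K_TOKENS[model]

record_inference("model-a", 1500)
record_inference("model-a", 500)
```

Feeding these aggregates into the governance dashboard is what makes cost anomalies (a runaway batch job, an unapproved model suddenly taking traffic) visible alongside accuracy metrics.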
Although data security and privacy run through every section of AI governance, monitoring users, their entitlements, and security policies is an important component in its own right.
Dashboards and workflow management are used to assess the health of AI applications and to initiate remedial actions in response to alerts. The first step is to triage the issue and perform root-cause analysis. This section of AI governance should integrate with ticketing systems like Jira and ServiceNow, and it should provide an incident management capability to document the steps taken to resolve incidents.
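As an illustration of the ticketing integration, the sketch below builds a Jira issue-creation request from an alert using Jira's REST API; the site URL, project key, and issue type are placeholder assumptions, and authentication is omitted:

```python
import json
import urllib.request

def open_incident(summary: str, description: str) -> urllib.request.Request:
    """Build a POST request that would open a Jira issue for an alert."""
    payload = {
        "fields": {
            "project": {"key": "AIGOV"},       # assumed project key
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Incident"}, # assumed issue type
        }
    }
    return urllib.request.Request(
        "https://example.atlassian.net/rest/api/2/issue",  # placeholder URL
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = open_incident("Drift alert: summarizer v1.2.0",
                    "Accuracy dropped below threshold; triage required.")
```

In practice the request would carry an API token and the response's issue key would be written back to the alert record, so the dashboard links each anomaly to its ticket.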
Finally, workflows should support assessments against relevant AI regulations and frameworks, such as the NIST AI Risk Management Framework.