Auditable AI provides the documentation and records necessary to pass a regulatory review, writes Dr Scott Zoldi, chief analytics officer at FICO.
With novel artificial intelligence (AI) applications multiplying like rabbits these days, it may seem like the current wave of AI innovation is all beer and skittles. Lawsuits have a way of sobering up any metaphorical party and, in the wake of numerous high-profile racial bias and fairness cases, The Wall Street Journal reports that companies including Google, Twitter and Salesforce say they “plan to bulk up ethics teams responsible for evaluating the behavior of algorithms.”
That’s a step in the right direction, but far from enough. In today’s litigious environment, AI-powered business decisions must be more than explainable, ethical and responsible; we need Auditable AI.
Can Your AI Pass Muster with Regulators?
As the mainstream business world moves from the theoretical use of AI to production-scale decisioning, Auditable AI is essential because it encompasses more than the tenets of Responsible AI (AI that is robust, explainable, ethical and efficient).
Auditable AI also provides the documentation and records necessary to pass a regulatory review, which could be expected to include questions like:
* What data was used to build the machine learning model? Is the data representative of a production environment? How were data biases addressed if/when they were discovered in the development phase?
* What are the derived variables used in the model? Are they biased? Are the variables approved for use in the model by a governance team?
* Which specific machine learning algorithms were leveraged? Are they appropriate for the data and problem being solved?
* Is the model fully explainable, with accurate reason codes for its automated decisions that are meaningful to both the model user and the impacted party?
* Was the model designed to be interpretable? What are the latent features that drive the outcome? Are they tested for bias?
* What stability testing was done on the model to understand and remediate the effects of shifts in production data?
* Are there specific monitoring requirements in place for data drift, performance drift and ethical treatment drift?
* What humble AI stop-gaps are in place to step down to a safer model when production customer data shifts from what the model was trained on? (A sketch of one such stop-gap follows this list.)
* How is the model aware of, and how does it respond to, adversarial AI attacks?
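The last few questions lend themselves to a concrete illustration. Here is a minimal sketch, in Python, of a drift check paired with a humble AI step-down; the Population Stability Index (PSI) cut-off, the scikit-learn-style predict_proba interface and the model objects are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between training (expected) and
    production (actual) score distributions. Common rule of thumb:
    < 0.1 stable, 0.1-0.25 worth watching, > 0.25 significant shift."""
    edges = np.unique(np.percentile(expected, np.linspace(0, 100, bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range production values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)      # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def humble_score(primary, fallback, features, train_scores, psi_limit=0.25):
    """Humble AI stop-gap: score with the primary model while production
    data still resembles the training data; otherwise step down to a
    simpler, safer fallback model."""
    prod_scores = primary.predict_proba(features)[:, 1]
    if psi(train_scores, prod_scores) > psi_limit:
        return fallback.predict_proba(features)[:, 1]  # step down
    return prod_scores
```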
Why Auditability Matters
It’s important to note that although the word “audit” has an after-the-fact connotation, Auditable AI emphasizes laying down (and using) a clearly prescribed record of work while the model is being built and before the model is put into production.
Auditable AI makes Responsible AI real by creating an audit trail showing that the company’s documented development governance standard was followed while the model was being built. This avoids haphazard, after-the-fact probing once development is complete. There are additional benefits: by learning as early as possible when a model goes off the rails, and failing fast, companies can save themselves untold agony, avoiding the reputational damage and lawsuits that occur when AI goes bad outside the data science lab.
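To illustrate what laying down that record during development might look like, here is a minimal sketch of an append-only development log; the checkpoint names, record fields and JSON-lines format are assumptions made for the example, not any specific product’s tooling.

```python
import json
from datetime import datetime, timezone

class DevelopmentAuditTrail:
    """Append-only log of governance checkpoints, written during model
    development so the audit record exists before the model ships."""

    def __init__(self, path):
        self.path = path

    def record(self, step, detail, approver):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "step": step,          # e.g. "variable-approval", "bias-test"
            "detail": detail,      # what was checked and the outcome
            "approver": approver,  # who signed off on this checkpoint
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Hypothetical usage during a model build:
trail = DevelopmentAuditTrail("model_0042_audit.jsonl")
trail.record("variable-approval",
             {"variable": "utilization_ratio", "bias_detected": False},
             approver="model-governance-board")
```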
Auditable AI Can Help Prevent Legal Challenges
In my 2020 AI predictions blog I foresaw the rise of AI advocacy groups. In the absence of full-fledged AI regulation, advocacy groups play a powerful role, and a never-ending stream of biased AI gives them no shortage of targets; a quick search on “AI advocacy groups” turned up nearly 60,000 news articles.
Legal costs, damaged reputations and customer dissatisfaction are just a few of the heavy costs of coming under AI advocacy groups’ scrutiny, and Auditable AI can help to prevent all of them. Adopting Auditable AI will ensure that a company’s AI standards are followed and enforced by recording key decisions and outcomes throughout the model development process.
Although it is no small task to establish the precise information that must be measured, reviewed and approved, doing so will give companies two invaluable advantages:
* They can persist model development information in perpetuity for later review and audit (especially important due to data science staff turnover).
* They can proceed with model builds confidently, because the builds adhere to the company’s documented standard, with “guard rails” to prevent deviations from it.
Steps toward Building Auditable AI
Without a firm model development standard and guidelines, it is difficult for companies to consistently produce the audit report that tracks compliance, or the key data used to ensure that models brought into production are fair, unbiased and safe.
Many companies suffer from competing data science religions: individual groups or, worse, renegade scientists who march to the beat of their own philosophical drum. In some cases, critical pieces of model governance are simply, and disturbingly, not addressed. Moving from research mode to production mode requires that data scientists and companies have a firm standard in place.
Since I, along with authors at Harvard Business Review, think that innovation should be driven by the Highlander Principle (“There can be only one.”), here are the questions your organisation needs to ask in developing Auditable AI:
* How is the analytic organisation structured today? Is there one leader, or a matrix with multiple leaders? If the latter, how well do they, or will they, coordinate with one another?
* How is the existing governance committee of analytic leaders structured? How will decisions be made as to what constitutes acceptability in AI algorithms and the company’s philosophy around the use of AI? How will the standard be documented?
* How is Responsible AI being addressed? Is there an active monitoring program in place? How does it operate? Are immutable blockchain technologies being used to persist a system of record of how the standard was met for each model, one that outlives individual data scientists and organizational shifts? (A toy hash chain illustrating this property follows the list.)
* What is the state of the data ethics program and data usage policies? How is synthetic data tested and used? What sort of stability testing is being done to make sure models will operate effectively in shifting production environments?
* What are the AI development standards? Which tools and assets are made available? Which algorithms are permissible, and which aren’t? Increasingly, from a model ops perspective, Auditable AI calls for standard tools and standard variable libraries. How are those choices made and maintained? Are there common codebases, and daily regression tests? How are learned latent features extracted and tested for suitability, stability and bias?
* How is the company achieving Ethical AI? Which AI technologies are allowed for use in the organization, and how will they be tested to ensure their appropriateness for the market? Is there monitoring in place today for each model and, if so, what’s being monitored? Ideally this includes data drift, performance drift and ethical treatment drift. What pre-set thresholds indicate when a model should no longer be used? (A monitoring sketch follows this list.)
* What is the company’s philosophy around AI research? Does the company drive to demonstrate that it’s inventive, indicating a tolerance for higher risk in its AI investments? Or is the company more conservative, wanting to ensure it is using proven technology that is regulated and easily monitored? Or will it take a hybrid approach, with one team tasked with demonstrating the potential art of the possible with AI, and another that will operationalise it?
* Is the company uniformly ethical with its AI? Is it placing some models under the Responsible AI umbrella because they are regulated and therefore high risk, while others are simply not built to the Responsible AI standard? How are those dividing lines set? Is it ever OK not to be responsible in the development of AI? If so, when?
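On the blockchain question above, the property that matters is an immutable, verifiable system of record. The toy hash chain below illustrates that property; a production system would sit on real ledger infrastructure, and the record fields are invented for the example.

```python
import hashlib, json, time

def add_block(chain, record):
    """Append a governance record; each block commits to the previous
    block's hash, so any retroactive edit is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"time": time.time(), "record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash and link; tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"model": "0042", "step": "algorithm-approval",
                  "approved_by": "governance-committee"})
assert verify(chain)
```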
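And on pre-set thresholds: the sketch below shows one way to wire the three drift dimensions to hard limits fixed at model approval time, reusing the psi function from the earlier sketch. The threshold values, the AUC measure of performance and the mean-score-gap definition of ethical treatment drift are illustrative assumptions, not a prescription.

```python
import numpy as np

# Limits fixed at model approval time; the numbers are illustrative.
THRESHOLDS = {"data_psi": 0.25, "auc_drop": 0.05, "group_gap": 0.05}

def check_model_health(train_scores, prod_scores, baseline_auc, prod_auc,
                       protected):
    """Return an alarm per drift dimension; any True means the model
    should be reviewed, stepped down or retired. `protected` is a
    boolean mask over prod_scores marking a protected group."""
    gap = abs(prod_scores[protected].mean() - prod_scores[~protected].mean())
    return {
        "data_drift": psi(train_scores, prod_scores) > THRESHOLDS["data_psi"],
        "performance_drift": baseline_auc - prod_auc > THRESHOLDS["auc_drop"],
        "ethical_treatment_drift": gap > THRESHOLDS["group_gap"],
    }
```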
Granted, there are myriad questions to be answered, and achieving Auditable AI can seem daunting. But there are already best-practices frameworks and approaches that can be readily adopted, providing critical building blocks. With the majority of organizations today deploying AI into a void, one fraught with risk, there is a true urgency to operationalize Auditable AI. The future of AI, and the business world as we know it, depends on this powerful technology being managed and monitored in an equally powerful way.