Enthusiasm for self-driving cars has waned, and automakers are rethinking or exiting their robo-taxi plans.
By Scott Zoldi, chief analytics officer at FICO
This is just one sign that we are in the middle of the Great Correction in AI — a period when wild ambitions and moon-shot ideas are being replaced by more realistic approaches to artificial intelligence and its attendant machine learning (ML) models, algorithms, and neural networks. I’m calling this the new pragmatism of Practical Artificial Intelligence, and I predict this technology will rise in 2023 like a phoenix from the ashes of years of irrational exuberance around artificial intelligence.
Under the umbrella of practicality, companies will strategically rethink how they use artificial intelligence, an attitudinal shift that will filter down to implementation, AI and machine learning model management, and governance. Here are my four predictions for Practical AI in 2023:
Novelty applications will be out, and practical applications will be in
Generative AI — in which algorithms create synthetic data — has been a big buzzword lately, with slick image-generation capabilities grabbing headlines. But the reality is, Generative AI isn’t a new technology; my data science organization at FICO has been using it for several years in a practical way to generate synthetic data and to do scenario testing as part of a robust AI model development process.
Here’s an example of why we need to focus more on practical uses of Generative AI: open banking represents a huge revolution in credit evaluation, particularly for the underserved. However, as this new financial channel takes off, the corpus of data needed to build real-time, customer-aware analytics is still lacking. Generative AI can be applied practically to produce realistic, relevant transaction data for developing real-time credit risk decisioning models.
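To make this concrete, here is a minimal, hypothetical sketch of generating synthetic transaction records for model development. The field names, distributions, and parameters are illustrative assumptions, not FICO’s actual approach:

```python
# A toy generative sampler for synthetic open-banking transactions.
# All field names, distributions, and parameters are assumptions chosen
# for illustration; a real system would fit these to portfolio data.
import numpy as np

rng = np.random.default_rng(seed=42)

def synthesize_transactions(n: int) -> list[dict]:
    """Sample n synthetic transactions for model development and testing."""
    # Log-normal amounts give the heavy right tail typical of spend data.
    amounts = rng.lognormal(mean=3.0, sigma=1.2, size=n)
    hours = rng.integers(0, 24, size=n)  # time of day
    categories = rng.choice(
        ["groceries", "fuel", "utilities", "entertainment", "transfer"],
        size=n,
        p=[0.35, 0.15, 0.15, 0.20, 0.15],  # assumed category mix
    )
    return [
        {"amount": round(float(a), 2), "hour": int(h), "category": str(c)}
        for a, h, c in zip(amounts, hours, categories)
    ]

if __name__ == "__main__":
    for tx in synthesize_transactions(3):
        print(tx)
```

In practice, the generative model would be fitted to real portfolio data, and validated for realism and privacy, rather than parameterized by hand as above.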
Artificial intelligence and machine learning development processes will become productionalized
Practical AI is incompatible with the modus operandi that many data science teams fall into:
- Build bespoke AI models, experimenting with new algorithms to maximize performance in-sample
- Spend inadequate time focused on whether these bespoke models and algorithms will generalize out of sample
- Put the bespoke model into production without knowing the consequences with certainty
- Be faced with clawing back the model, or worse, letting it run with unforeseen and/or unmonitored consequences.
To achieve production-quality artificial intelligence, the development processes themselves will need to be stable, reliable, and productionalized. This comes back to model development governance, frameworks for which will increasingly be provided and facilitated by new artificial intelligence and machine learning platforms now entering the market. These platforms will set standards, provide tools, define application programming interfaces (APIs) for properly productionalized analytic models, and deliver built-in capabilities to monitor and support them.
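What might such a standard look like? As a purely illustrative sketch, with field names and thresholds that are my own assumptions rather than any vendor’s actual API, a productionalized model package could carry machine-readable deployment and monitoring metadata along these lines:

```python
# A hypothetical model-package definition: the deployment and monitoring
# requirements travel with the model, so a platform can enforce them.
# Field names, metrics, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class MonitoringSpec:
    drift_metric: str = "psi"            # how input/score drift is measured
    drift_threshold: float = 0.25        # alert above this value (assumed)
    bias_check_features: list[str] = field(
        default_factory=lambda: ["age_band"]
    )
    latency_sla_ms: int = 50             # real-time decisioning budget

@dataclass
class ModelPackage:
    model_id: str
    version: str
    input_schema: dict[str, str]         # feature name -> type
    monitoring: MonitoringSpec

pkg = ModelPackage(
    model_id="credit-risk-rt",
    version="2023.1.0",
    input_schema={"amount": "float", "hour": "int", "category": "str"},
    monitoring=MonitoringSpec(),
)
print(pkg.monitoring.drift_metric, pkg.monitoring.drift_threshold)
```

The design point is that no model is complete without its monitoring contract attached, which is exactly what the next prediction builds on.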
AI governance is a major focus of my work, and I predict that in 2023 we will see artificial intelligence platforms and tools increasingly become the norm for facilitating in-house Responsible AI development and deployments, providing the necessary standards and monitoring. As a corollary, the Kaggle approach to model development – extracting the highest predictive power from a model, at all costs – will similarly give way to a new Practical AI sensibility coupled with business focus: what’s the best 95% solution? The reality is, 95% is likely sufficient for most AI applications, and in many ways preferred when we put the model performance into a larger context of:
- Model interpretability
- Ethical AI
- Environmental, social, and corporate governance (ESG) considerations
- Simplicity of monitoring
- Ease of meeting regulatory requirements
- Time to market
- Avoiding excessive cost and risk in complex AI applications.
Proper model package definition will improve the operational benefits of AI
Productionalizing AI includes directly codifying, during the model creation process, how and what to monitor in the model once it’s deployed. Setting an expectation that no model is properly built until the complete monitoring process is specified will produce many downstream benefits, not the least of which is smoother artificial intelligence operations:
- AI platforms will consume these enhanced model packages and reduce model management struggles. We will see improvement in model monitoring, bias detection, interpretability, and real-time model issue reporting.
- Interpretability provided by these model packages will yield machine-learning models that are transparent and defensible.
- Rank distillation methods will ensure that model score distribution and behavior detection remain similar from one model update to the next. This will allow updates to be integrated more smoothly into the existing rules and strategies of the larger artificial intelligence system (see the stability-check sketch after this list).
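Rank distillation itself is a specialized technique. As a general, hedged illustration of the underlying stability check, comparing a candidate update’s score distribution against the incumbent’s, a population stability index (PSI) computation might look like this; the bin count and alert threshold are assumptions:

```python
# A sketch of monitoring score-distribution stability between a deployed
# model and its update, using the population stability index (PSI). This
# illustrates the goal of techniques like rank distillation; it is not
# FICO's implementation. Bin count and alert threshold are assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    # Bin edges come from the incumbent model's score quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full score range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip avoids division by zero and log(0) in empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Example: score the same accounts with the incumbent and the candidate;
# a PSI above ~0.25 is a common (assumed) "investigate" threshold.
rng = np.random.default_rng(7)
incumbent = rng.beta(2, 5, size=10_000)
candidate = rng.beta(2.1, 5, size=10_000)  # slightly shifted distribution
print(f"PSI = {psi(incumbent, candidate):.4f}")
```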
There will be a handful of enterprise-class AI cloud services
Clearly, not every company that wants to safely deploy AI has the resources to do so. The software and tools required can simply be too complex or too costly to pull together piecemeal. As a result, only about a quarter of companies have AI systems in widespread production. To solve this challenge and address a gigantic market opportunity, I predict that 2023 will see the emergence of a handful of enterprise-class AI cloud services.
Just as Amazon Web Services, Google Cloud, and Microsoft Azure are the “Big Three” of cloud computing services, a few top AI cloud service providers will emerge to offer end-to-end AI and machine learning development, deployment, and monitoring capabilities. Readily accessible via API connectivity, these professional AI software offerings will allow companies to develop, execute, and monitor their models and algorithms, while also demonstrating proper AI governance. These same cloud AI platforms could also recommend when to drop down to a simpler model (Humble AI) to maintain trust in decision integrity.
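To illustrate the Humble AI idea, here is a hedged sketch of routing logic that drops down to a simpler, more interpretable model whenever monitoring says the primary model is outside its trusted operating range; all names and thresholds are hypothetical:

```python
# A hypothetical "Humble AI" fallback: use the primary ML model only while
# monitoring says its inputs/scores look like what it was trained on;
# otherwise route the decision to a simpler, interpretable model.
from typing import Callable

def humble_decision(
    features: dict,
    primary: Callable[[dict], float],
    fallback: Callable[[dict], float],
    drift_psi: float,
    psi_threshold: float = 0.25,  # assumed trust threshold
) -> tuple[float, str]:
    """Return (score, model_used), preferring the primary model when trusted."""
    if drift_psi <= psi_threshold:
        return primary(features), "primary"
    return fallback(features), "fallback"

def primary_model(f: dict) -> float:
    # Stand-in for a complex ML scorer.
    return 0.9 * f["utilization"] + 0.01 * f["inquiries"]

def scorecard(f: dict) -> float:
    # Stand-in for a simple, interpretable scorecard rule.
    return 0.5 if f["utilization"] > 0.8 else 0.2

score, used = humble_decision(
    {"utilization": 0.85, "inquiries": 3},
    primary_model,
    scorecard,
    drift_psi=0.31,  # monitoring reports a shifted distribution
)
print(f"score={score:.2f} via {used}")
```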
Where Practical AI Lives: The Corpus AI
Over the past five years or so I’ve been evangelizing the need for Responsible AI practices, which guide us on how to properly use data science tools to build AI decisioning systems that are explainable, ethical, and auditable. These principles are at the heart of an organization’s metaphorical analytic body. But they are not enough. This analytic body, which I call the Corpus AI, is where Responsible AI and Practical AI must be supported by the equivalents of a biological circulatory system, skeletal system, connective tissue, and more.
Looking ahead to 2023, learning to cope with the ever-evolving market pressures will remain the new normal. I believe my AI predictions will allow the Corpus AI to strengthen and flourish during, and far beyond, the Great Correction – in a mature, standardized, auditable, and regulation-ready way.
In the meantime, practical applications of AI are already achieving great things. In November, FICO won the Machine Learning in Credit and Collections Award at the 2022 Credit & Collections Technology Awards in London. The award was for an advanced scam detection model, which is available to lenders using the Retail Banking model in FICO Falcon Fraud Manager, the world’s leading payments fraud protection solution. My team, which built the model, is very proud of our work with lenders to stop scams.