The rush to implement AI leaves organisations struggling to track its use, and the promise of increased productivity is driving overreliance on unproven AI deployed without sufficient trustworthy AI safeguards.
To bring order to the chaos, SAS AI Navigator will soon be available to help AI, data, compliance and risk leaders compile a complete AI inventory and align AI use cases with government regulations and internal policies.
“AI governance is too often thought of as a compliance measure,” says Reggie Townsend, vice-president of SAS AI Ethics, Governance and Social Impact.
“It’s a growth driver. Instead of fears of shadow AI putting the organisation at risk, AI governance empowers people to push the limits of AI within a structured, transparent and secure environment.”
The use of AI agents and LLMs is outpacing trustworthy AI investments, according to a study of trust in AI by SAS and IDC. At the same time, Gartner predicts that by 2030 more than 40% of enterprises will experience security or compliance incidents linked to unauthorised shadow AI.
SAS AI Navigator will be available in Q3 2026 on Microsoft Azure Marketplace. It is a Software-as-a-Service (SaaS) solution that enables organisations to inventory and govern AI use cases at the point of business impact.
The governance extends to the models and agents that power use cases, as well as the policies applied to them. For instance, companies using chatbots to interact with customers would not only be able to govern the underlying agent or model, such as Claude or Microsoft Copilot, but also apply policies to ensure it adheres to regulatory expectations.
Organisations don’t need to change how they build AI; SAS AI Navigator provides a unified view of whatever models and tools they already use, including LLMs, AI agents and open source or SAS models. It supports the full journey from experimentation through deployment to retirement, covering all governed assets whether built in-house or purchased from third parties.
SAS AI Navigator also makes it easy for users to apply internal policies and external regulations and frameworks to AI use cases.