Sage has announced a new initiative in partnership with PwC that aims to redefine how AI is built and adopted in finance, combining transparent, explainable AI with the governance and real-world expertise required to use it with confidence.
The initiative, “Beyond the Black Box”, is backed by new Sage research, conducted by IDC, which found that 71% of finance leaders would reject an AI system that cannot explain its outputs, even if those outputs are highly accurate. The finding suggests that trust, not technology, is holding back AI adoption.
“Finance does not run on answers alone – it runs on answers you can explain,” says Steve Hare, CEO of Sage. “If you cannot show how a number was produced, you cannot use it. That is why we are building AI differently. AI you can trust can’t be a black box; we see it as a glass box that gives finance teams full visibility into how it works, so they can stand behind it with confidence.”
Unlike previous AI initiatives that have focused on large enterprises or purely technical audiences, “Beyond the Black Box” was created with SMB realities at its core. It forms part of Sage’s commitment to helping more SMBs benefit from the transformative impact of AI, building on the company’s Responsible AI framework and AI Trust Label and reinforcing the belief that trust must be built into AI from the outset.
Trust: the biggest barrier to AI adoption
As AI becomes more capable, the ability to explain and stand behind its outputs is emerging as the defining factor in whether it is trusted and adopted in finance.
The consequences are already measurable. Finance professionals spend an average of 12.9 hours every week reconstructing, validating and defending AI outputs, much of it driven by systems that do not clearly show how their results were produced. Rather than removing overhead, opaque AI is creating a new category of it.
Sage describes this as the trust cost of AI – the gap between what AI systems promise in theory and what finance teams can actually rely on in practice. At its core, this is a transparency challenge. Every number, recommendation and AI-supported decision must be explainable to auditors, to boards, and to regulators. When it cannot be, adoption stalls.
From black box AI to glass box
Sage has designed its AI from the ground up for the realities of finance, where every output is transparent, explainable and accountable, so organisations can trust and act on it with confidence.
This represents a deliberate shift away from black box AI, where outputs are generated without visibility into how decisions are made, towards what Sage describes as glass box AI, where customers can meaningfully engage with AI results rather than accept them on blind faith. Every answer is explainable, every recommendation is verifiable, and every output can be interrogated.
Through the initiative, Sage and PwC will combine their expertise into practical tools and frameworks to help finance teams understand, assess, and adopt AI responsibly. This includes embedding trust into how AI is implemented in finance environments whilst building on Sage’s existing commitment to SMBs, including the Sage AI Academy.
From pilot to practice
To help move organisations from AI experimentation to trusted, scalable adoption, Sage selected PwC as its lead partner, drawn by PwC’s proven expertise in deploying AI across its own business. PwC has embedded AI into day-to-day workflows at scale, with 86% of its employees actively using AI tools, more than 240,000 Microsoft Copilot licences deployed, and over 4,000 custom GPTs developed and reused across the firm.
“PwC’s role is to build trust as technology reshapes how business decisions are made,” says Marco Amitrano, PwC UK Senior Partner. “This initiative with Sage reflects a shared ambition: to ensure AI innovation is grounded in the quality and transparency expected of market-leading finance systems.”