The pace of change in today’s digital world has placed immense pressure on institutions to modernise, and central banks are no exception.
By Ntsako Baloyi, data and AI lead for Accenture, Africa
As the custodians of financial stability, these institutions are facing new demands that extend beyond traditional monetary policy and financial oversight. They now need to be agile, data-driven, and technologically forward-looking to remain relevant in a world increasingly shaped by artificial intelligence.
Within this context, creating future-ready IT organisations powered by AI is not only prudent – it’s essential.
Central banks operate in a uniquely complex environment, balancing public trust with regulatory rigour and technical precision. The introduction of AI, particularly generative AI, opens new possibilities for operational efficiency, fraud detection, data analysis, and regulatory compliance.
Yet these same technologies also present novel risks – ranging from algorithmic bias to data privacy concerns and the threat of misuse. To harness the full value of AI while managing its inherent risks, central banks must adopt a holistic, responsible, and practical approach to AI integration.
A foundational shift is required. IT organisations within central banks must evolve from traditional support functions into enablers of continuous reinvention. This means embedding AI into their core architecture – not as a bolt-on tool, but as a strategic capability. To do this effectively, central banks must understand where and how AI is being deployed within their operations. Maintaining a detailed inventory of AI use cases, models, and systems is a first step toward transparency, accountability, and governance.
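To make the idea of an AI inventory concrete, here is a minimal sketch of what such a register might look like in code. The fields (owner, model type, data sources, risk tier) and the example entries are illustrative assumptions, not a prescribed schema – a real inventory would be backed by a governance process and a proper data store.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIUseCase:
    """One entry in an AI inventory (fields are illustrative)."""
    name: str
    owner: str                  # accountable business owner
    model_type: str             # e.g. "gradient boosting", "LLM"
    data_sources: list = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.MEDIUM

class AIInventory:
    """Central register of AI systems, queryable for oversight."""
    def __init__(self):
        self._entries = []

    def register(self, use_case: AIUseCase):
        self._entries.append(use_case)

    def high_risk(self):
        """High-risk systems get the closest governance attention."""
        return [u for u in self._entries if u.risk_tier is RiskTier.HIGH]

inventory = AIInventory()
inventory.register(AIUseCase("fraud-detection", "payments-team",
                             "gradient boosting", ["transactions"],
                             RiskTier.HIGH))
inventory.register(AIUseCase("report-summariser", "research-team",
                             "LLM", ["internal reports"]))
print([u.name for u in inventory.high_risk()])  # → ['fraud-detection']
```

Even a simple register like this supports the transparency goal: governance reviews can start from a queryable list of what is deployed, who owns it, and where the risk sits.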
From a technical perspective, the lifecycle of data and AI systems is more complex than ever. Risks now span the entire data supply chain – from ingestion and curation to modelling, deployment, and monitoring. Each step presents unique vulnerabilities: biased or poor-quality data, prompt toxicity in generative systems, and drift in model performance over time. These are not isolated issues – they are systemic and require an integrated response.
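One widely used way to quantify the "drift in model performance over time" mentioned above is the Population Stability Index (PSI), which compares the distribution of a feature or model score at training time against what is seen in production. The sketch below is a minimal pure-Python implementation; the bin count and the alert threshold an institution would apply to the result are assumptions that vary by use case.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample
    (e.g. training data) and a production sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fractions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bin v falls into
            counts[idx] += 1
        n = len(values)
        # Floor at a tiny value so the log term is always defined.
        return [max(c / n, 1e-6) for c in counts]

    e_frac = bucket_fractions(expected)
    a_frac = bucket_fractions(actual)
    return sum((a - e) * math.log(a / e)
               for e, a in zip(e_frac, a_frac))

# Identical distributions score ~0; a shifted one scores much higher.
baseline = [i / 100 for i in range(100)]
shifted = [v + 0.3 for v in baseline]
print(round(psi(baseline, baseline), 4))  # → 0.0
print(psi(baseline, shifted) > 0.5)       # → True
```

In practice a metric like this would run continuously against production data, with rising values triggering review or retraining – the "integrated response" the paragraph calls for, rather than a one-off check at deployment.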
Accenture’s responsible AI approach highlights seven key pillars that central banks can adopt to embed ethics, safety, and transparency into their AI strategies. Among them are fairness, explainability, accountability, data privacy, and sustainability. But having principles is not enough – central banks must implement them through tangible tools, processes, and controls. This involves designing for traceability, ensuring data lineage is clear, and setting up rigorous testing and validation mechanisms before models are deployed. Ongoing monitoring is essential, especially in high-risk applications.
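The "rigorous testing and validation mechanisms before models are deployed" can be made operational as an automated gate that blocks release until defined checks pass. The following is a simplified sketch; the specific metric names and thresholds are illustrative assumptions, and a real gate would cover many more dimensions (robustness, security, documentation completeness).

```python
# A minimal pre-deployment gate; metric names and thresholds
# are illustrative assumptions, not a prescribed standard.
def validation_gate(metrics, thresholds):
    """Return (passed, failures) for a candidate model before release."""
    failures = []
    if metrics["accuracy"] < thresholds["min_accuracy"]:
        failures.append("accuracy below threshold")
    if metrics["fairness_gap"] > thresholds["max_fairness_gap"]:
        failures.append("fairness gap across groups too large")
    if metrics["missing_data_rate"] > thresholds["max_missing_rate"]:
        failures.append("training data too incomplete")
    return len(failures) == 0, failures

thresholds = {"min_accuracy": 0.90, "max_fairness_gap": 0.05,
              "max_missing_rate": 0.02}
passed, issues = validation_gate(
    {"accuracy": 0.93, "fairness_gap": 0.08, "missing_data_rate": 0.01},
    thresholds)
print(passed, issues)  # → False ['fairness gap across groups too large']
```

The value of encoding checks this way is traceability: every deployment decision leaves a record of which criteria were evaluated and why a model was or was not released.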
Another important consideration is choosing the right foundation model and adaptation approach. The market for generative AI models is rapidly expanding, with options ranging from open-source platforms like LLaMA and BLOOM to commercial offerings like GPT-4 and Claude 2.
Central banks need to assess these options not just in terms of performance, but also with respect to data sovereignty, cost, contextual fit, and security. Some models can be used “as is,” while others allow for fine-tuning or full customisation depending on the required level of control and strategic differentiation.
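One simple way to structure the assessment described above is a weighted decision matrix across the named criteria. The sketch below uses the five criteria from the text; the weights, candidate options, and 0–10 ratings are purely illustrative assumptions that each institution would set for itself.

```python
# Criteria taken from the text; weights are illustrative assumptions.
WEIGHTS = {"performance": 0.30, "data_sovereignty": 0.25,
           "cost": 0.15, "contextual_fit": 0.15, "security": 0.15}

def score(ratings):
    """Weighted sum of 0-10 ratings across the selection criteria."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

# Hypothetical candidate profiles, not vendor benchmarks.
candidates = {
    "open-source, self-hosted": {"performance": 6, "data_sovereignty": 9,
                                 "cost": 7, "contextual_fit": 7,
                                 "security": 8},
    "commercial API":           {"performance": 9, "data_sovereignty": 4,
                                 "cost": 6, "contextual_fit": 6,
                                 "security": 6},
}
ranked = sorted(candidates, key=lambda c: score(candidates[c]),
                reverse=True)
print(ranked[0])  # → open-source, self-hosted
```

The point is not the particular numbers but the discipline: making the weights explicit forces a central bank to decide, in advance, how much data sovereignty or security matters relative to raw model performance.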
The choice of model ties directly into the broader question of enterprise readiness. Prompt engineering may deliver short-term results with minimal complexity, but fine-tuning or pre-training models using proprietary data can unlock deeper insight and long-term value – albeit with greater technical and financial investment. It’s crucial that central banks align their AI strategy with their core digital infrastructure, ensuring it is scalable, secure, and fit for purpose.
Equally important is the human side of transformation. AI is not a plug-and-play solution – it requires new ways of working and fresh talent strategies. Central banks must invest in upskilling their workforce, building internal capabilities in data science, machine learning, and AI governance. This includes preparing leadership to understand the implications of AI at both strategic and operational levels. Having a single accountable leader for AI can help drive coherence and consistency across the organisation.
Regulatory compliance is another cornerstone of responsible AI adoption. The evolving landscape – particularly with new frameworks like the EU AI Act – means that central banks must stay ahead of changing requirements. They need risk taxonomies that reflect the nature of AI systems, processes to validate model outputs, and policies that align with international best practice. Importantly, these controls should not stifle innovation, but rather enable it within clearly defined ethical and legal boundaries.
South African financial institutions are already exploring how to modernise their systems and harness emerging technologies. However, for AI to be truly transformative, adoption must be deliberate and responsible. The opportunity lies not just in digitising what already exists, but in reimagining how institutions operate, how they serve the public, and how they collaborate with ecosystems of technology providers, fintechs, and regulatory bodies.
As we move deeper into the era of generative AI, the need for future-ready IT organisations will only grow more urgent. For central banks, the path forward is not about chasing trends, but about building resilient, ethical, and intelligent systems that can adapt to whatever comes next. By grounding their AI strategies in responsibility and operational excellence, they can unlock new value and fulfil their evolving public mandate with confidence.