As artificial intelligence (AI) moves from experimentation to everyday business use, South African organisations are discovering that success depends far less on sophisticated algorithms than on skills, oversight and operational discipline.

While AI tools have become increasingly accessible, many companies are now facing a more practical challenge: how to integrate these systems into real workflows while maintaining control, accountability and reliability.

“AI skills are often misunderstood,” says Sasha Slankamenac, AI practice lead in the office of the chief technology officer at Dariel. “They’re not limited to building models or writing complex algorithms. They include data literacy, the ability to ask better questions, the skill to use AI tools effectively, and the judgement to test whether an output is useful.”

According to Slankamenac, the organisations seeing real value from AI are typically those that approach the technology as an operational capability rather than a standalone innovation project.

 

AI adoption is happening in practical areas first

Across South Africa, businesses are introducing AI into operational processes where the benefits are immediately visible.

These applications include fraud detection, credit and risk analysis, customer support automation, document processing, internal knowledge search, forecasting, recommendation systems, coding assistance and decision-support tools.

“Executives often experience AI less as a category of technology and more as tools being inserted into workflows to reduce effort, improve speed or strengthen decision-making,” Slankamenac explains.

Many of these systems combine predictive analytics with newer generative AI capabilities, but the underlying objective remains the same: improving efficiency and decision quality.

 

Businesses need operational skills, not just AI scientists

Despite the attention given to AI specialists, Slankamenac says that most companies do not need large teams of machine learning researchers.

“What organisations actually need are people who can clean up data, connect systems, reshape workflows, monitor outputs and step in when a tool gets something wrong,” he says.

This combination of technical and operational capability typically includes skills in data engineering, software integration, compliance oversight and business process design.

“The companies extracting real value from AI are rarely the ones showing off the flashiest demos,” Slankamenac notes. “They’re the ones that can make the technology behave inside the messiness of day-to-day operations.”

This reflects a broader trend in enterprise technology, where the challenge lies less in developing models and more in deploying them reliably across complex business environments.

 

Industry expertise remains essential

Slankamenac emphasises that AI systems cannot replace domain expertise.

“A model can generate an answer, but it doesn’t understand the commercial, legal or operational consequences of being wrong unless people build that context into the system,” he says.

For example, healthcare AI tools still require clinical judgement, while financial decision systems must operate within strict regulatory frameworks. In these cases, experienced professionals remain essential to interpret and challenge machine-generated outputs.

“AI works best when it’s used by people who understand the domain deeply enough to question it,” he adds.

 

Guardrails must be built into AI systems

As adoption grows, organisations are increasingly focused on governance and risk management.

Slankamenac says effective guardrails must be designed into AI systems from the start.

“A business should define what the AI is allowed to do, what data it can access, what its outputs should look like and where human review is mandatory,” he explains.

These safeguards are then supported by technical controls such as logging, testing, approval workflows, access management and ongoing monitoring.
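To make this concrete, the sketch below shows one possible way to wrap a model call in such a policy layer. It is an illustrative assumption, not a prescribed design: the task list, blocked fields, review rules and the stubbed-out model are all hypothetical, and a production system would rely on its own access controls, testing and review tooling.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrails")

# Hypothetical policy: what the AI may do, what data it may see,
# and which tasks always require a human to approve the output.
ALLOWED_TASKS = {"summarise_document", "draft_reply", "classify_ticket"}
BLOCKED_FIELDS = {"id_number", "salary", "medical_history"}
REVIEW_REQUIRED_TASKS = {"draft_reply"}

@dataclass
class GuardedResult:
    task: str
    output: str
    needs_human_review: bool

def guarded_call(task: str, record: dict, model: Callable[[str, dict], str]) -> GuardedResult:
    """Run a model call inside the guardrails described above."""
    # 1. Scope: refuse tasks the system was never approved to perform.
    if task not in ALLOWED_TASKS:
        raise PermissionError(f"Task '{task}' is outside the approved scope")

    # 2. Data access: strip fields the model is not allowed to see.
    safe_record = {k: v for k, v in record.items() if k not in BLOCKED_FIELDS}

    # 3. Call the model (stubbed below) and check the shape of its output.
    output = model(task, safe_record)
    if not isinstance(output, str) or not output.strip():
        raise ValueError("Model returned an empty or malformed output")

    # 4. Log the call and flag mandatory human review where the policy requires it.
    needs_review = task in REVIEW_REQUIRED_TASKS
    log.info("task=%s fields=%s review=%s", task, sorted(safe_record), needs_review)
    return GuardedResult(task=task, output=output, needs_human_review=needs_review)

# Stand-in for a real model, included only to make the sketch runnable.
def fake_model(task: str, record: dict) -> str:
    return f"[{task}] processed {len(record)} fields"

if __name__ == "__main__":
    result = guarded_call("draft_reply", {"name": "A. Client", "salary": 50000}, fake_model)
    print(result)
```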

“Guardrails are not just about blocking bad answers,” Slankamenac says. “They’re what turn a clever model into something a business can trust to use repeatedly.”

 

Leadership must develop AI literacy

For managers and executives, the rise of AI is creating new leadership responsibilities.

“Leaders don’t need to become machine learning engineers,” says Slankamenac. “But they do need enough AI literacy to stop being fooled by polished outputs.”

This includes understanding where AI adds value, where it introduces risk, and how work itself may need to change around automated systems.

One of the defining management skills of the coming years, he suggests, will be distinguishing between outputs that sound convincing and those that are genuinely reliable.

 

Oversight systems must evolve

Traditional management approaches also need to adapt to the speed and scale of AI-driven workflows.

Oversight is increasingly shifting towards continuous monitoring systems that include dashboards, audit trails, exception reporting and automated quality checks embedded directly into processes.
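As a rough illustration of what an embedded quality check, audit trail and exception report might look like, the sketch below keeps a record of recent AI-assisted decisions and raises an alert when the failure rate crosses a threshold. The window size, tolerance and field names are assumptions for the example; a real deployment would feed the same signals into its own dashboards and alerting systems.

```python
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One line of the audit trail: what the AI did and how the output was judged."""
    timestamp: str
    task: str
    output_ok: bool          # result of an automated quality check
    reviewed_by_human: bool

class QualityMonitor:
    """Watches recent AI-assisted decisions and raises an exception report when quality slips."""

    def __init__(self, window: int = 100, max_failure_rate: float = 0.05):
        self.records = deque(maxlen=window)       # rolling audit trail
        self.max_failure_rate = max_failure_rate  # hypothetical tolerance

    def record(self, task: str, output_ok: bool, reviewed_by_human: bool) -> None:
        self.records.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            task=task,
            output_ok=output_ok,
            reviewed_by_human=reviewed_by_human,
        ))
        self._check()

    def _check(self) -> None:
        failures = sum(1 for r in self.records if not r.output_ok)
        rate = failures / len(self.records)
        if rate > self.max_failure_rate:
            # In practice this would feed a dashboard or alerting channel.
            print(f"EXCEPTION REPORT: failure rate {rate:.1%} over last {len(self.records)} decisions")

if __name__ == "__main__":
    monitor = QualityMonitor(window=20, max_failure_rate=0.10)
    for i in range(20):
        monitor.record("classify_ticket", output_ok=(i < 17), reviewed_by_human=False)
```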

“The focus of oversight is changing,” Slankamenac says. “Less attention on watching activity, and more attention on watching the quality and consequences of AI-assisted decisions.”

 

Data remains the foundation

Despite advances in AI models, data quality remains one of the biggest constraints on effective deployment.

“Data gives AI its shape,” Slankamenac explains. “It determines what the system learns, how reliable its outputs are and whether those outputs are useful in practice.”

If data is incomplete, biased or poorly governed, the AI will reflect those weaknesses.

“In many organisations, the real constraint isn’t access to powerful models,” he says. “It’s the state of the data underneath them.”

 

Managing the risks of AI

AI systems also introduce new operational and ethical risks.

These include incorrect outputs, biased recommendations, privacy breaches, security vulnerabilities and misplaced confidence in machine-generated decisions.

“One of the biggest dangers is scale,” Slankamenac notes. “A weak human decision affects one case at a time. A weak AI-supported process can repeat the same bad judgement across thousands.”

To counter these risks, organisations must treat AI as a managed operational capability with defined controls and governance structures.

 

Accountability remains with the organisation

Despite their sophistication, AI systems cannot carry responsibility for decisions.

“Responsibility always stays with the organisation and the people who chose the system, built it into a process and acted on its outputs,” Slankamenac says.

“AI is a powerful tool, but accountability can’t be outsourced to a machine.”

 

The safest path to adoption

For many organisations, the safest AI use cases are those that support employees rather than replace them.

Applications such as drafting, summarising, transcription, coding support, document classification and workflow automation can improve productivity while still allowing human review.

“The risk increases sharply when AI begins making high-stakes decisions about money, employment, healthcare or legal matters without strong oversight,” Slankamenac says.

“Used responsibly, AI becomes a powerful partner for human judgement. But it works best when organisations focus on skills, governance and disciplined execution, not just the technology itself.”