Machine learning (ML) and artificial intelligence (AI) are unstoppable, disruptive forces that come with pros, cons and reasons for caution, according to KID Group’s expert data scientists.

Wrapping up the year’s developments and looking at trends for 2024, KID Group’s data science team notes that ML and AI have immense potential, but says data quality, ethics and the potential impact on people must be considered as businesses harness these technologies.

AI in ethical decision-making

Data scientist Janco van Niekerk acknowledges concerns around the limitations of AI in making ethical decisions, especially in areas like healthcare, criminal justice, and autonomous vehicles.

He says: “Making ethical decisions will become more crucial in the development of AI. When we encounter a moral dilemma while building a machine learning model, we’ll need to establish a moral framework in a clear and mathematical way. This process of defining moral problems concisely may shake up our current ideas about what’s right and show how complicated ethical decision making can be.”
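To make the idea of a “clear and mathematical” moral framework concrete, the sketch below is a hypothetical illustration (not a description of KID Group’s work) of stating one ethical criterion mathematically: a model is penalised when its positive-prediction rate differs between two groups, a common formalisation known as demographic parity. The penalty weight and the toy data are assumptions.

```python
# Hypothetical illustration: expressing an ethical criterion as a mathematical penalty.
# Neither the criterion nor the numbers come from the article; they are assumptions.
import numpy as np

def demographic_parity_gap(predictions, group):
    """Absolute difference in positive-prediction rate between group 0 and group 1."""
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

def penalised_loss(base_loss, predictions, group, weight=1.0):
    """Model loss plus a fairness penalty; the weight states the moral trade-off explicitly."""
    return base_loss + weight * demographic_parity_gap(predictions, group)

# Toy usage: binary predictions for six applicants split across two groups.
preds = np.array([1, 0, 1, 1, 0, 0])
groups = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(preds, groups))            # ~0.33, the gap the penalty targets
print(penalised_loss(0.25, preds, groups, weight=0.5))  # base loss plus weighted gap
```

Writing the trade-off down as a single weight is the kind of explicit framing the quote describes: it forces the dilemma into the open rather than leaving it implicit in the data.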

Anel Leenen, managing consultant and data scientist, adds: “It remains challenging to measure how AI will do in these ethical environments – or any environment – because people will change and adapt to them. Certainly no one should be thinking that AI will operate in a vacuum.

“AI will provide support so that potentially better decisions can be made, especially in areas where a limited amount of information can be absorbed by the practitioner. Diversity in data will become paramount so that inherent biases in these areas are not solidified in AI recommendations.”

On instances where even the developers can’t fully explain how the AI arrived at a particular decision, van Niekerk says: “AI safety is becoming a major worldwide concern. The ML industry has made some progress in providing a level of explainability; however, I do feel continued research is needed, as this issue will only become more important.”
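As one concrete example of the explainability work van Niekerk alludes to, the sketch below uses scikit-learn’s permutation importance, which estimates how much a model’s held-out score drops when each feature is shuffled. The dataset and model are purely illustrative and are not drawn from the article.

```python
# Illustrative sketch of one common explainability technique: permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record the drop in held-out accuracy:
# the larger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top_features:
    print(f"{name}: {importance:.3f}")
```

Techniques like this give a partial, model-agnostic view of what drives predictions; they do not close the explainability gap, which is why continued research matters.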

AI and stakeholder expectations

With generative AI having boosted AI and ML hype in 2023, the data scientists note that business leaders may have heightened expectations of what the technology can do for them.

Van Niekerk says: “Business leaders typically fall into two categories regarding ML. Some are naïvely optimistic about ML and the change it can bring to business, and often believe that ML will provide the business with near-perfect predictions. In contrast, there are those who are highly sceptical about ML algorithms and view the idea of predicting future events as wishful thinking.

“The truth lies somewhere between these two viewpoints: models are rarely a perfect representation of reality, but their predictions can lead us to make the correct decision most of the time. If this decision-making process is repeated many times, we will end up with a net benefit.”
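The net-benefit-from-repeated-decisions argument can be made concrete with a toy simulation. All of the numbers below (accuracy, gain, loss) are assumptions chosen purely for illustration, not figures from the article.

```python
# Toy simulation: an imperfect model, applied repeatedly, can still produce a net benefit.
# Accuracy and payoff figures are assumptions for illustration only.
import random

random.seed(0)

ACCURACY = 0.7        # model is right 70% of the time (assumed)
GAIN_IF_RIGHT = 100   # value of acting on a correct prediction (assumed)
LOSS_IF_WRONG = -150  # cost of acting on a wrong prediction (assumed)

def total_benefit(n_decisions):
    total = 0
    for _ in range(n_decisions):
        correct = random.random() < ACCURACY
        total += GAIN_IF_RIGHT if correct else LOSS_IF_WRONG
    return total

for n in (10, 1_000, 100_000):
    print(n, total_benefit(n))

# Expected value per decision: 0.7 * 100 + 0.3 * (-150) = +25, so despite frequent
# errors the cumulative benefit drifts positive as the decision is repeated.
```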

He adds: “ML models rarely provide perfectly accurate predictions. In a business context, these models should not be treated as perfectly accurate but as tools to get one step closer to the truth and solve the specific problem. Model accuracy is also highly dependent on the format and availability of training data. Not all data will be equally useful when trying to predict the outcome of certain events.”

Leenen cites deep learning pioneer Andrew Ng, who noted that the real differentiator between businesses that are successful at AI and those that aren’t is the data used to train the algorithm, how it is gathered and processed, and how it is governed. Ng also highlighted the practice of ‘smartsizing’ data so that a successful AI system can be built using the least amount of data possible.

Leenen says: “ML/AI is not a silver bullet, and implementing AI/ML in your business will not automatically fix data principles, philosophies or strategies that the organisation has fostered over time.”

AI and humans

Analysing the impacts of AI on humans, and how the emerging technologies are likely to change decision-making, Markus Top, data science partnerships and practice manager, says: “Relying solely on AI for decision support can lead to a reduced emphasis on human judgement and intuition. Over time, individuals may become overly dependent on the predictions and recommendations generated by AI models without thoroughly evaluating the context or considering alternative perspectives.”

He cautions: “Depending on AI for decision support might lead to a phenomenon known as inattentional blindness, where individuals may overlook critical information or potential issues because they trust the AI system to catch everything. This can result in a lack of awareness and attentiveness to details.

“Over-reliance on AI can also create a situation where individuals absolve themselves of accountability for decisions, attributing outcomes solely to the recommendations of the AI system. This lack of ownership can hinder the development of a responsible decision-making culture.”

To mitigate these challenges, it’s crucial to promote a culture of collaboration between humans and AI, Top says. “Decision support tools should be viewed as aids to human decision-makers rather than replacements. This requires ongoing training and education to ensure that individuals understand the strengths and limitations of AI and are equipped to make informed decisions in conjunction with AI recommendations.”

On the question of AI displacing jobs, Top says organisations need to engage in proactive workforce planning to identify which tasks can be automated and which skills will be in high demand. Assessing the impact of AI on various job roles helps in designing strategic plans for workforce transformation, he says.

“By championing a human-centric approach to AI integration, businesses can mitigate the negative impacts of job displacement and ensure a workforce that is adaptable and well-prepared for the challenges and opportunities of the future,” Top says.

“Organisations should also foster a culture of continuous learning that encourages employees to acquire new skills throughout their careers, together with mentorship programmes and learning pathways to guide employees in their professional development.”

The future of human-AI collaboration

AI should be deployed as a tool to augment human capabilities, rather than replace them, the data scientists say.

Van Niekerk says: “ML models have more uses than simply automating human tasks. For example, models can be built with the explicit aim of reducing uncertainty in a decision-making process by leveraging patterns in large amounts of past data. These are tasks in which humans have not traditionally fared well, but for which machines are perfectly suited.”
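One hedged illustration of reducing uncertainty rather than automating a task: instead of a hard yes/no answer, the model below (an assumed scikit-learn setup on synthetic data, not KID Group’s approach) hands the decision-maker a probability to weigh alongside their own judgement.

```python
# Illustrative sketch: the model outputs a probability the human can weigh, rather than
# replacing the decision. Data and model choice are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The decision-maker sees an estimated probability, not just a label, so the
# remaining uncertainty is explicit and can inform the final human call.
probabilities = model.predict_proba(X_test)[:, 1]
for p in probabilities[:5]:
    print(f"Estimated probability of the positive outcome: {p:.2f}")
```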

Says Top: “Instead of viewing AI as a replacement for human tasks, it should be seen as a powerful assistant, capable of handling repetitive tasks, analysing vast datasets, and providing valuable insights that lead to increased efficiency, innovation, and productivity. AI holds great potential when approached with a mindset of augmentation rather than replacement. By leveraging AI as a tool to complement human abilities, we can create a symbiotic relationship that maximises the strengths of both entities.”