As artificial intelligence (AI) becomes increasingly integral to business operations, data governance is essential to ensuring the ethical and effective use of the technology.

Petrus Keyter, data governance consultant, and Willem Conradie, chief technology officer of PBT Group, share their insights on how companies should consider integrating AI into their business processes responsibly.

Most recently, Keyter says he has been involved in foundational work at a start-up client where the focus was on establishing data governance from scratch. This entailed aligning data governance with the company’s strategy of prioritising customer data privacy and ethical data handling.

“Understanding what data they have and where it resides were crucial components of the project,” says Keyter. “This fell within the space of metadata management and data quality, making sure the data is accurate and that the business is processing high-quality data for its customers.”

He explains that this reflects a broader shift toward business owners playing a more significant role in data quality. This can partly be attributed to the availability of AI tools that simplify the creation of data quality rules. “AI technology allows general users to input layperson sentences, and the data quality tool then suggests the necessary data quality rules. Furthermore, the AI tools can transform these rules, once approved, into executable actions.”
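
As a rough illustration of the kind of executable check such a tool might propose (the rule text, column name, and validation logic below are hypothetical examples, not taken from any specific product), a plain-language requirement like “every customer must have a valid email address” could end up as a simple, reviewable check of this sort:

```python
import pandas as pd

# Hypothetical data quality rule, as a business owner might phrase it:
RULE_DESCRIPTION = "Every customer record must have a valid, non-empty email address."

# An executable form an AI-assisted tool might suggest for that sentence.
EMAIL_PATTERN = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"

def check_email_rule(customers: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that violate the email rule, for review by the data owner."""
    valid = customers["email"].fillna("").str.match(EMAIL_PATTERN)
    return customers[~valid]

if __name__ == "__main__":
    # Toy data for demonstration only.
    df = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "email": ["ann@example.com", "not-an-email", None],
    })
    violations = check_email_rule(df)
    print(f"Rule: {RULE_DESCRIPTION}")
    print(f"{len(violations)} of {len(df)} records fail the rule:")
    print(violations)
```

The point of keeping the generated rule this explicit is that a business owner can approve or reject it before it is ever run against production data.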

Conradie explores some of the essentials of AI when it comes to data governance, stressing that the accuracy of AI predictions hinges on the quality of training data. “Data must be accurate, relevant, and ethically sourced to ensure AI models perform their intended functions correctly. Every organisation intending to adopt AI needs to put a comprehensive data governance framework in place to manage data effectively, ensuring it is fit for purpose and ethically used.”

Ethical considerations are vital in this regard. Conradie points out that ethical standards can vary across companies, individuals, and countries. “Responsible AI involves integrating privacy, security, inclusivity, transparency, and accountability from the outset. AI is not purely a technology. Instead, it is an organisational shift that requires structural adjustments within companies if they are to manage AI responsibly.”

Keyter himself is passionate about data quality, drawing parallels between addressing data quality and software development, and emphasising the need to address data issues as close to the source as possible. “Bad data is a personal irritation of mine. I strongly believe that developing business data quality rules and involving business owners in the data quality process are critical.”

Conradie says that there are several key considerations for companies when it comes to AI and data governance.

“The AI model is only going to be as accurate as the data it has been trained on. Feeding it inaccurate data will produce inaccurate results. If the company does not contextualise the data the AI model will use for the role it must fulfil, it will give answers to the wrong questions.

“The data must be relevant to the topic, which amplifies the mission-critical importance of data governance. Therefore, processes and a framework must be in place to ensure the company uses the right data at the right time for the right purpose.”

This is where metadata becomes essential, alongside data quality, for knowing whether the data is accurate and fit for purpose.
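
One way to picture how metadata and data quality work together is to record, alongside each dataset, who owns it, what purposes it may be used for, and how well it scores against quality rules, and then gate its use on that metadata. The sketch below is purely illustrative; the attribute names, threshold, and purposes are assumptions, not a prescribed framework:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetMetadata:
    """Illustrative metadata an organisation might keep about a dataset."""
    name: str
    owner: str                    # accountable business owner
    approved_purposes: set        # purposes the data may be used for
    quality_score: float          # e.g. share of records passing quality rules
    last_quality_check: date

def fit_for_purpose(meta: DatasetMetadata, purpose: str,
                    min_quality: float = 0.95) -> bool:
    """Check whether a dataset may be used for a given purpose."""
    return purpose in meta.approved_purposes and meta.quality_score >= min_quality

# Example: a customer dataset approved for billing and service, but not for model training.
customers = DatasetMetadata(
    name="customer_master",
    owner="Head of Customer Operations",
    approved_purposes={"billing", "service"},
    quality_score=0.98,
    last_quality_check=date(2024, 1, 15),
)

print(fit_for_purpose(customers, "billing"))         # True
print(fit_for_purpose(customers, "model_training"))  # False: not an approved purpose
```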

Both Conradie and Keyter caution against the ethical pitfalls of AI processing customer data without clear consent, highlighting the need for AI tools to respect customer data processing agreements.
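
A minimal sketch of what such a consent check might look like in practice is to exclude any customer record from an AI workload unless consent for that specific purpose is on record. The consent flags and purpose names below are assumptions for illustration; real consent management depends on the organisation's data processing agreements and the applicable law:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    """Illustrative customer record with per-purpose consent flags."""
    customer_id: int
    consents: set = field(default_factory=set)   # e.g. {"marketing", "ai_profiling"}

def records_usable_for(records: list, purpose: str) -> list:
    """Keep only records whose owner has consented to the given purpose."""
    return [r for r in records if purpose in r.consents]

customers = [
    CustomerRecord(1, {"marketing", "ai_profiling"}),
    CustomerRecord(2, {"marketing"}),
]

# Only customer 1 may be included in an AI profiling workload.
print([r.customer_id for r in records_usable_for(customers, "ai_profiling")])  # [1]
```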

“The local regulatory environment must still catch up with AI. AI adoption is moving faster than any previous phase of major disruption in the industry, and there is not yet a comprehensive set of legislation to govern the adoption and use of AI and machine learning in the country.

“But that is not to say that businesses couldn’t still find themselves in hot water with the Information Regulator and other stakeholders if their AI deployment is not compliant with already enforceable data protection regulations such as POPIA and GDPR, at a minimum. I anticipate some interesting scenarios where things will go wrong and regulations will adapt in response. Until then, responsible AI deployment comes down to keeping to sound business ethics,” concludes Conradie.