Over the past decade, advancements in big data and artificial intelligence (AI) have drastically transformed our world, creating many new opportunities but also giving rise to many challenges, particularly around ethics. This transformation is especially evident in the business and education spheres, but technological advances have made themselves felt in every area of society and industry.
By Professor Yudhvir Seetharam, head of analytics, insights and research for FNB Business
In the business realm, AI and big data analytics have revolutionised everything from customer engagement to supply chain management. However, this shift to data-driven decision-making has left companies grappling with a variety of risks, from the misuse of customer data to the perpetuation of social biases through automated systems.
Largely as a result of these rapid advancements in technology, the conversation around ethics in business has evolved significantly. Previously, these ethical discussions typically centred around issues like corporate social responsibility, fair trade, and sustainability. However, the proliferation of AI and big data has introduced a new set of ethical quandaries that are reshaping the landscape of business ethics.
It’s an evolution that has been as quick as it has been transformative. In the early years of the big data boom, the primary focus was on harnessing the power of data for competitive advantage. Businesses eagerly adopted data analytics tools to optimise operations, enhance customer experiences, and drive revenue. Ethical considerations at that time were often secondary, largely reactive, and predominantly addressed issues like data security breaches.
As the technology matured, however, so did the ethical questions. For instance, businesses that deployed AI for hiring or lending were faced with the challenge of ensuring that their algorithms did not perpetuate existing social inequalities. Of course, the questions weren’t limited to hiring; the transparency and legitimacy of AI decision-making in general became a significant issue, leading to a demand for much more ‘ethical’ algorithms.
The financial services sector faces similar ethical quandaries. While technology has brought great benefit to areas like fraud detection and algorithmic trading, the sector has come under increased scrutiny over issues like unfair algorithmic bias and the fair treatment of customers. Given the sector’s significant focus on enabling inclusive financial empowerment, biases like these carry obvious reputational risks.
In the education sector too, AI has shown promise in terms of its potential to personalise learning and streamline administration. However, here too, there are growing concerns around ethics. For example, there have been questions regarding the ethical implications of using predictive algorithms to assess student performance, potentially reinforcing existing inequalities in the educational system. The issue of cheating and plagiarism is another huge question mark hanging over education as students increasingly discover the power of large language models.
In recent years, a number of key global events have compounded these growing ethical concerns, but interestingly, they have also served as turning points in the collective ethical consciousness around advancing technology. Arguably the most well-known of these was the Facebook-Cambridge Analytica scandal, which brought data privacy concerns squarely into the global spotlight. While damaging to the parties involved, the scandal acted as a catalyst, pushing businesses and regulatory bodies to re-evaluate how consumer data should be handled.
Another ongoing catalyst of ethical thinking around data usage has been the controversial use of AI in predictive policing, which many fear will perpetuate systemic biases.
These, and many other milestone events, have forced a re-evaluation of ethical frameworks, not just in business but in society as a whole, and have led to substantial public debate. Businesses have come to recognise that, when it comes to data, ethics is not just a compliance issue, it’s a competitive differentiator – albeit a double-edged one that, if handled incorrectly, can deliver a death blow to brand reputation.
Regulatory frameworks like the European Union’s General Data Protection Regulation (GDPR), the US Federal Trade Commission’s Fair Information Practice Principles, the OECD Privacy Framework’s Basic Principles and, closer to home, the Protection of Personal Information Act (POPIA) have helped to further accelerate the ethical shift. These regulations have made it clear that organisations are accountable for how they collect, store, and use data.
Perhaps as a result of these regulatory advances, in recent years, there has been a move toward institutionalising ethics within organisations. Companies have established dedicated ethics committees, hired ethics officers, and even incorporated ethics into their key performance indicators (KPIs). And ethical considerations are also being integrated into the entire data lifecycle of many organisations, from collection to analysis and utilisation.
It’s clear, then, that our ethical thinking has progressed substantially over the past 10 years, from largely reactive responses to more proactive and pre-emptive considerations. Given that technological development shows no sign of slowing down, ethical concerns are likely to continue multiplying just as rapidly.
The global community will continue to put increasing pressure on any institution or industry operating in the public domain to visibly demonstrate that ethical considerations are not merely optional add-ons to their bottom-line aspirations, but integral to the responsible development and deployment of technology. And irrespective of their financial performance, organisations that fail to meet these ethical expectations may well find themselves having to close their doors.