Chatbots, autonomous vehicles, and connected machines in digital factories foreshadow what the future will look like: the widespread implementation of artificial intelligence (AI) applications brings many advantages for businesses such as increased efficiencies, fewer repetitive tasks and better customer experiences.
However, in the wrong hands, the technology's potential threats could easily counterbalance its huge benefits.
Vulnerability to malicious cyber-attacks or technical failure will increase, as will the potential for larger-scale disruptions and extraordinary financial losses as societies and economies become increasingly interconnected.
Companies will also face new liability scenarios as responsibility for decision-making shifts from human to machine and manufacturer.
In the new report “The Rise of Artificial Intelligence: Future Outlook and Emerging Risks”, insurer Allianz Global Corporate & Specialty (AGCS) identifies both the benefits and emerging risk concerns around the growing implementation of AI in society and industry, including in the insurance sector. AI, a term often used interchangeably with machine learning, is essentially software that is able to think and learn like a human.
“AI comes with potential benefits and risks in many areas: economic, political, mobility, healthcare, defense and the environment. Active risk management strategies will be needed to maximise the net benefits of a full introduction of advanced AI applications into society,” says Michael Bruch, head of emerging trends at AGCS.
Today, “weak” or basic forms of AI are able to perform specific tasks, but future generations of so-called “strong” AI applications will be capable of solving difficult problems and executing complex transactions. AI is beginning to find uses in almost every industry, from chatbots that offer financial advice to tools that help doctors diagnose cancer. The technology is used to power driverless cars, better predict the weather, process financial transfers, and monitor and operate industrial machines. According to Accenture, AI could double the annual economic growth rate in 12 developed economies by 2035.
But with these potential benefits come risks. Cyber risks, which are one of the biggest risks for businesses according to the Allianz Risk Barometer 2018, illustrate the two different faces of new technologies such as AI: AI-powered software could help to reduce cyber risk for companies by better detecting attacks, but could also increase it if malicious hackers are able to take control of systems, machines or vehicles.
AI could enable more serious and more targeted cyber incidents by lowering the cost of devising attacks. The same hacker attack – or programming error – could be replicated across numerous machines. It is already estimated that a major global cyber-attack could trigger losses in excess of $50 billion, while even a half-day outage at a cloud service provider could generate losses of around $850 million.
Emerging AI risks in five areas
To identify emerging AI risks, AGCS has focused on five areas of concern: software accessibility, safety, accountability, liability and ethics.
“By addressing each of these areas, responsible development and introduction of AI becomes less hazardous for society. Preventive measures that reduce risks from unintended consequences are essential,” Bruch says.
In terms of safety, for example, the race to bring AI systems to market could lead to insufficient or negligent validation, which is necessary to guarantee the deployment of safe, functional and cyber-secure AI agents. This, in turn, could lead to an increase in defective products and recalls.
With regard to liability, AI agents may take over many decisions from humans in future, but they cannot legally be held liable for those decisions.
In general, the manufacturer or software programmer of an AI agent is liable for defects that cause damage to users. However, AI decisions that are not directly related to design or manufacturing, but are taken by an AI agent because of its interpretation of reality, would have no explicitly liable party under current law.
“Leaving the decisions to courts may be expensive and inefficient if the number of AI-generated damages starts increasing,” Bruch says. “A solution to the lack of legal liability would be to establish expert agencies or authorities to develop a liability framework under which designers, manufacturers or sellers of AI products would be subject to limited tort liability.”
Meanwhile, insurers will have a crucial role to play in helping to minimise, manage and transfer emerging risks from AI applications.
Traditional coverages will need to be adapted to protect consumers and businesses alike. Insurance will need to better address the exposures businesses face, such as cyber-attacks, business interruption, product recall and reputational damage.
New liability insurance models will likely be adopted – in areas such as autonomous driving for example – increasing the pressure on manufacturers and software vendors and decreasing the strict liability of consumers.
Insurers are early AI adopters
The insurance industry has been an early adopter of machine learning, as it deals with large volumes of data and many repetitive processes.
“There is huge potential for AI to improve the insurance value chain. Initially, it will help automate insurance processes to enable better delivery to our customers. Policies can be issued, and claims processed, faster and more efficiently,” Bruch explains.
By boosting data analytics, AI will also give insurers and their customers a much better understanding of their risks, so that those risks can be reduced more effectively, while new insurance solutions could also be developed.
For example, AI-powered analytics could help companies better understand cyber risks and improve security. At the same time the technology could assist insurers in identifying accumulations of cyber exposure. Last but not least, AI will change the way insurers interact with their customers, enabling 24/7 service.