Kathy Gibson reports from Gartner Symposium in Cape Town – The new generation of artificial intelligence (AI) will be capable of learning as it goes along, and should avoid some of the disasters of the past, where automated scripts caused problems like stock market crashes.

Steve Prentice, vice-president and Gartner fellow, explains that the AI systems being produced today are capable of learning and are no longer constrained to answering only the questions, or following only the scripts, programmed into them.

“Yes, they are capable of avoiding many of those problems,” he says. “They are able to discover the answers, and identify where they don’t have the information they need.”

One problem, though, may be the legacy systems that are still running. “You will have simple systems alongside very advanced ones. The advanced systems probably won’t make bad decisions, but the stupid systems might.”

Prentice doesn’t believe that the increasing use of AI will contribute to a deepening digital divide.

“For the vast majority of people the technology itself is beyond our full understanding – you could spend a lifetime and still be learning,” he says. “The new digital divide will be those who are able to take advantage of and trust the machines versus those who don’t.”

In fact, the new systems place AI in the hands of just about every human being. “We all have access to very advanced AI systems because we all have smartphones – these systems are now more accessible than ever before.”

The new barriers, says Prentice, will be around issues of preparedness and comfort level – and this comes down to trust.

On the business front, the availability of more and better information raises some uncomfortable challenges. “Business people have been asking for better data for some time, so they can make better decisions,” says Prentice.

“IT is now giving them that data, and sometimes the business leaders don’t like it. Sometimes getting all the facts to make better decisions undermines people’s perception of their own power and influence.”

The question of ethics and legality is a burning issue when it comes to AI, and this will need to be resolved sooner rather than later.

“Organisations need to decide on this soon,” Prentice says. “What is becoming clear now is that even if people legally own the data, ethically they could come unstuck if they use it in certain ways.

“There are all sorts of issues around privacy, oversight and Big Brother,” he adds. “The same data could be used for useful things, or for massive oversight of civil liberties.”