Kathy Gibson is at Gartner Symposium in Cape Town – Technology will impact the way we all do business, and there are some trends that organisations cannot afford to ignore.

Brian Burke, research vice-president at Gartner, points out that the technologies that will make the most change centre predominantly around people and the smart spaces they occupy.

The people-centric technologies are hyperautomation, multi-experience, democratisation, human augmentation, and transparency and traceability.

Technologies relating to smart spaces are empowered edge, autonomous things, distributed cloud, practical blockchain and artificial intelligence (AI) security.


Hyperautomation

The goal for most organisations is to automate anything that can be automated, Burke points out.

In fact, the number one use case now for AI is process automation, and these applications will become more sophisticated over time.

The path to hyperautomation continues to move forward: from simple automation, we have today added task automation and event automation. Organisations are now adding process automation and conversational user experiences (UXs), and will soon automate business operations (BusOps) and digital operations (DigitalOps).
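As a rough illustration of the task and event automation end of that path (not a Gartner example – the event types and handlers below are invented), a simple dispatcher can route routine events to automated handlers and leave everything else for people:

```python
# Minimal sketch of task/event automation (illustrative only; all names hypothetical).
# An "event" is just a dict; handlers encode the repetitive task a person would otherwise do.

from datetime import datetime

def handle_invoice_received(event):
    # Task automation: file the invoice and flag it for payment.
    print(f"[{datetime.now():%H:%M}] Filing invoice {event['id']} for {event['amount']}")

def handle_password_reset(event):
    # Event automation: react to a routine request without human involvement.
    print(f"[{datetime.now():%H:%M}] Resetting password for user {event['user']}")

HANDLERS = {
    "invoice_received": handle_invoice_received,
    "password_reset": handle_password_reset,
}

def dispatch(event):
    handler = HANDLERS.get(event["type"])
    if handler:
        handler(event)
    else:
        print(f"No automation for {event['type']}; routing to a human queue")

if __name__ == "__main__":
    dispatch({"type": "invoice_received", "id": "INV-001", "amount": "R12 500"})
    dispatch({"type": "password_reset", "user": "thandi"})
    dispatch({"type": "contract_dispute", "id": "C-77"})  # still needs a person
```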


Multi-experience

By 2021 at least one-third of enterprises will have deployed a multi-experience development platform to support mobile, Web, conversational and augmented reality development.

This will include various touchpoints, Burke explains. Immersive environments will change the way we perceive and interact with the world. It could be useful in applications like field service, training and collaborative design.

To get there, we have moved from the static Web of the early 2000s to the mobile experiences of the 2010s. The landscape will now move to the multi-experience with conversational, immersive and sensory experiences.

Multi-experience includes multi-sensory and multi-modal platforms.

Sensory inputs come from the user in the form of taste, sight, sound and touch, as well as from outside sources like radar, humidity, location and emotion.

The multi-modal environment embraces watches, phones, PCs and more in various locations.
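To make the idea concrete, the sketch below – purely illustrative, with the devices and rendering rules as assumptions – shows one alert being adapted to several touchpoints and modalities:

```python
# Illustrative sketch only: rendering one message across several modalities/devices.
# Device names and render rules are assumptions, not a reference design.

def render(message: str, device: str) -> str:
    if device == "watch":
        return message[:40]                      # glanceable summary
    if device == "phone":
        return f"{message}\n[Open] [Dismiss]"    # tappable actions
    if device == "smart_speaker":
        return f"(spoken) {message}"             # conversational/voice modality
    if device == "ar_headset":
        return f"(overlay pinned to machine) {message}"  # immersive modality
    return message

if __name__ == "__main__":
    alert = "Pump 7 temperature is trending high; inspection recommended."
    for device in ("watch", "phone", "smart_speaker", "ar_headset"):
        print(f"{device:>13}: {render(alert, device)}")
```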


Democratisation

By 2022, 30% of organisations using AI for decision-making will contend with shadow AI as the biggest risk to effective and ethical decisions.

Burke describes shadow AI as AI that sits outside of IT's control, and is more at risk than systems under IT's control – although these can be compromised too.

Issues driving AI democratisation include frameworks and algorithms that empower developers; hosted AI services delivered through APIs, which could help where in-house data is insufficient; and packaged software products, which could lead to shadow AI. These are all amplified by a lack of AI skills within IT and the organisation at large.

Democratisation embraces everyone within the organisation, and CIOs need to think carefully about how it will affect the company. It will show up in the emergence of citizen process automation, citizen data scientists and citizen development, enabled by accessible technology.

They will be utilising tools that include virtual assistants, predictive analytics, and the automation of processes and applications.
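The hosted-AI-service pattern behind much of this democratisation can be sketched in a few lines. The endpoint, payload and response fields below are hypothetical, but the shape is typical of consuming a pre-trained model through an API without owning the data or the model:

```python
# Sketch of the "hosted AI service behind an API" pattern driving democratisation.
# The endpoint, payload and response fields are hypothetical; real services differ.

import requests

API_URL = "https://ai.example.com/v1/sentiment"   # hypothetical hosted AI service
API_KEY = "replace-with-your-key"

def score_sentiment(text: str) -> dict:
    # A citizen developer can consume a pre-trained model without ML skills or training data.
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()   # e.g. {"label": "negative", "confidence": 0.87}

if __name__ == "__main__":
    print(score_sentiment("The delivery was two weeks late and nobody answered my calls."))
```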


Human augmentation

By 2025, 40% of enterprises will shift from designing for humans to architecting humans themselves by adopting human augmentation technologies and methodologies.

Burke explains that physical augmentation will present in many different ways, for instance, in prosthetics and RFID tagging.

Indeed, much of this technology exists now, Burke says. What is inhibiting it is largely social resistance to some of the technologies.

Cognitive augmentation in the virtual world is also starting to become mainstream. Use cases include AI in training; automating routine tasks but handing over to humans for the final decision; AI built into processes using robotic process automation, pushing work back and forth between humans and machines; and humans and machines working together, using AI to achieve better results.
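A minimal sketch of the "automate the routine, hand over the final decision" pattern might look like this (the confidence model and threshold are invented for illustration):

```python
# Minimal human-in-the-loop sketch (an assumption, not a specific Gartner pattern):
# the model handles routine cases and hands low-confidence ones to a person.

def model_score(claim: dict) -> float:
    # Stand-in for a real model: small claims look routine, large ones less so.
    return 0.95 if claim["amount"] < 5000 else 0.55

def decide(claim: dict, threshold: float = 0.9) -> str:
    confidence = model_score(claim)
    if confidence >= threshold:
        return f"auto-approved (confidence {confidence:.2f})"
    return f"routed to human adjudicator (confidence {confidence:.2f})"

if __name__ == "__main__":
    for claim in ({"id": 1, "amount": 1200}, {"id": 2, "amount": 48000}):
        print(claim["id"], decide(claim))
```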

Many of the technologies exist today, but Burke believes that the tipping point for human augmentation will come with social acceptance.

“This is the big issue,” he says. “We have the technology: we could do it; we maybe should do it – but will we do it?

“The answer comes down to the ethical use of technology and leaders must consider not just the technology’s ability, but whether it is ethically responsible to deploy it.”


Transparency and traceability

By 2023, more than 75% of large organisations will hire AI specialists in behaviour, forensics, privacy and customer trust to reduce their brand and reputation risk.

The risk of damage to brand and reputation is going to become more of an issue going forward, and companies need to understand how to take ethical decisions about the technology they deploy, says Burke.

Currently, there is a trust crisis in IT and in society, arising from things like omnipresent IoT data collection, algorithmic bias, counterfeit reality, fake news and reviews, opaque algorithms, addictive applications, unauditable AI and unauthorised data harvesting, among others.

“CIOs have to think about which of the key principles of trust they need to apply,” Burke says.

The pillars of trust are integrity, ethics, openness, accountability, competence and consistency.

To enable these in an IT system, it needs to be validated and tested, provide awareness, accountability and assurance, be able to offer an explanation, and make data provenance available.
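In practice, part of this comes down to recording enough about each automated decision to audit it later. The sketch below – with illustrative field names, not a prescribed schema – attaches provenance, accountability and an explanation to a single decision:

```python
# Sketch of attaching provenance and an explanation to an automated decision
# so it can later be audited. Field names are illustrative assumptions.

import hashlib, json
from datetime import datetime, timezone

def decision_record(input_data: dict, decision: str, reasons: list, model_version: str) -> dict:
    payload = json.dumps(input_data, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,                             # accountability: which model decided
        "input_fingerprint": hashlib.sha256(payload).hexdigest(),   # data provenance
        "decision": decision,
        "explanation": reasons,                                     # human-readable reasons for the outcome
    }

if __name__ == "__main__":
    record = decision_record(
        {"applicant_id": "A-102", "income": 18000, "months_employed": 4},
        decision="declined",
        reasons=["employment history below 6 months"],
        model_version="credit-model-1.3.0",
    )
    print(json.dumps(record, indent=2))
```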

“These are some of the bigger issues that CIOs will have to deal with in future,” Burke says.


Empowered edge

Gartner predicts that, by 2023, more than 50% of enterprise-generated data will be created and processed outside the data centre or cloud.

This is because technology is enabling a new business edge, says Burke.

Edge devices in the Internet of Things (IoT) world are becoming more intelligent, which is improving security and reducing cost while driving new and better user experiences, smarter and safer systems, real-time responsiveness and resiliency.
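The essence of that intelligence is doing the filtering and decision-making on the device and sending only a small summary upstream. The sketch below is a toy example of the pattern, with invented thresholds and payloads:

```python
# Illustrative edge-processing sketch: the device filters and aggregates readings
# locally and sends only a summary upstream. Thresholds and payloads are assumptions.

import random, statistics

def read_sensor() -> float:
    return random.gauss(70.0, 3.0)   # stand-in for a temperature probe

def edge_cycle(samples: int = 60, alarm_at: float = 78.0) -> dict:
    readings = [read_sensor() for _ in range(samples)]
    summary = {
        "mean": round(statistics.mean(readings), 2),
        "max": round(max(readings), 2),
        "alarm": max(readings) > alarm_at,   # real-time decision made at the edge
    }
    return summary  # only this small payload goes to the cloud or data centre

if __name__ == "__main__":
    print("uplink payload:", edge_cycle())
```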

Among the underlying technologies are processing products like deep neural networks (DNNs) on a chip, low-cost CPUs, swarm computing, digital twins, low-earth orbit (LEO) satellites, millimetre-wave wireless and more.

On the edge, the architecture is moving from a hierarchical to a matrix world, Burke adds.


Autonomous things

By 2025, it is expected that more than 12% of all newly produced vehicles will have a Level 3 or higher autonomous driving hardware capability.

Level 5 will be achieved when cars drive on their own all the time, while Level 4 is where drivers don't have to do anything – and this is still a few years off, says Burke.

The real challenge for autonomous things – be they cars, robots, drones or more – is that they need to perceive the environment around them – where the road is, where the other cars are, where objects are.

Following from this is the need for mobility, where they move safely within that world.

When it comes to robotics, manipulation must be added, so that they can handle objects that might not be in a fixed location.

For a truly autonomous environment, these devices will be able to collaborate with one another, Burke adds.

The world of autonomous things will be mature when robots, autonomous vehicles and drones are able to operate in an uncontrolled environment.
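Strip the progression down and it is a perceive-plan-act loop. The toy sketch below (a one-dimensional world, entirely illustrative) shows how perception, mobility and manipulation fit together:

```python
# Toy perceive / move / manipulate loop on a 1-D track, to make the progression
# concrete. Entirely illustrative; real autonomy stacks are vastly more complex.

def perceive(position: int, obstacles: set) -> dict:
    return {"obstacle_ahead": (position + 1) in obstacles}

def plan(perception: dict) -> str:
    return "stop_and_grasp" if perception["obstacle_ahead"] else "move_forward"

def act(position: int, action: str, obstacles: set) -> int:
    if action == "move_forward":
        return position + 1
    obstacles.discard(position + 1)   # "manipulate": clear the object from the path
    return position

if __name__ == "__main__":
    position, obstacles = 0, {3, 6}
    for step in range(8):
        action = plan(perceive(position, obstacles))
        position = act(position, action, obstacles)
        print(f"step {step}: action={action}, position={position}")
```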

“We have the technology to do a lot of these things now – for instance autonomous vehicles in mines – but technological capability is still inhibited by legislation and social acceptance,” Burke points out.


Distributed cloud

By 2024, Gartner predicts that the majority of cloud service platform providers will offer services that execute at the point of need.

There are a number of reasons to do this, including low latency, privacy regulations and data residency.

This is driving a shift to the distributed cloud, Burke says.

The current model is the hybrid cloud which combines a service provider data centre with public cloud services, with an organisation-owned enterprise data centre providing private cloud services.

On-premise public cloud is where the public cloud service architecture is extended to the on-premise data centre and managed by a cloud service provider in a consistent way that reduces infrastructure requirements for the enterprise.

This eliminates a lot of the issues associated with hybrid cloud by allowing the cloud service providers to provide services on-site.

We will soon also see decentralised cloud services for IoT and the edge, in the form of far edge, near edge and edge services that bring processing to the point of need.
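One way to picture distributed cloud is as a set of placement rules that decide where each workload executes; the tiers and rules below are assumptions for illustration only:

```python
# Sketch of describing where each workload executes in a distributed cloud.
# Tier names and placement rules are illustrative assumptions.

PLACEMENT_RULES = {
    "video_analytics": {"tier": "far_edge",      "reason": "low latency at the camera"},
    "patient_records": {"tier": "on_prem",       "reason": "data residency regulation"},
    "monthly_billing": {"tier": "public_region", "reason": "elastic batch capacity"},
}

def place(workload: str) -> str:
    rule = PLACEMENT_RULES.get(workload, {"tier": "public_region", "reason": "default"})
    return f"{workload} -> {rule['tier']} ({rule['reason']})"

if __name__ == "__main__":
    for workload in PLACEMENT_RULES:
        print(place(workload))
```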


Practical blockchain

By 2023, it is expected that blockchain-inspired technology will support the global movement and tracking of $2-trillion worth of goods and services.

Burke points out that blockchain is a distributed ledger with immutable records, encryption, distributed consensus and tokenisation.
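The "immutable record" part can be illustrated with a minimal hash-chained ledger; the sketch below deliberately leaves out distributed consensus and tokenisation:

```python
# Minimal hash-chained ledger to illustrate immutable records – not a full
# blockchain (no distributed consensus or tokenisation in this sketch).

import hashlib, json

def make_block(index: int, data: dict, prev_hash: str) -> dict:
    body = {"index": index, "data": data, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected = make_block(block["index"], block["data"], block["prev_hash"])["hash"]
        if block["hash"] != expected or (i > 0 and block["prev_hash"] != chain[i - 1]["hash"]):
            return False
    return True

if __name__ == "__main__":
    chain = [make_block(0, {"event": "genesis"}, "0" * 64)]
    chain.append(make_block(1, {"shipment": "SH-881", "origin": "Cape Town"}, chain[-1]["hash"]))
    print("valid:", verify(chain))
    chain[1]["data"]["origin"] = "Durban"   # tampering breaks the chain
    print("valid after tampering:", verify(chain))
```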

A factor inhibiting the early adoption of blockchain is that the compute power requirements to produce tokens have significant technical and management challenges, he adds.

Although there are full blockchain implementations in the form of cryptocurrencies, Gartner thinks we will only see complete commercial solutions by 2027 when management issues are resolved. Enhanced blockchain solutions will likely only emerge about 2030, Burke adds.

That said, there are some successful blockchain-inspired use cases now. The highest success rates are seen when the system is used to solve an immediate business problem, says Burke. These could include things like provenance, identity management and trade finance.


Artificial intelligence (AI) security

Gartner predicts that, by 2022, 30% of all cyber attacks will leverage training data poisoning, model theft and adversarial samples.

These all attack the AI or machine learning (ML) value chain and could be fatal for organisations.

The AI pipeline takes in new data that goes through data preparation, model training, result validation and production deployment.

There are opportunities for cyber attackers to intervene at the data preparation stage with training data poisoning.
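A basic defence at the data preparation stage is to screen incoming training examples before they reach the model. The simple outlier check below is an assumption for illustration and would only catch crude poisoning, not a determined attacker:

```python
# Sketch of a screening step at data preparation: flag incoming training values
# that sit far from trusted data. A z-score check is an illustrative assumption.

import statistics

def flag_outliers(trusted_values, incoming, z_threshold=3.0):
    mean = statistics.mean(trusted_values)
    stdev = statistics.stdev(trusted_values)
    return [x for x in incoming if stdev and abs(x - mean) / stdev > z_threshold]

if __name__ == "__main__":
    trusted = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]
    new_batch = [10.0, 10.4, 97.0]     # 97.0 looks like an injected, poisoned point
    print("suspicious samples:", flag_outliers(trusted, new_batch))
```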

Model theft happens when an AI model is reconstructed from queries and their results – effectively rebuilding the model by repeatedly querying it.

Both of these attack vectors can be mitigated by controlling which people and computers are allowed to feed data into the model or query it.

Adversarial samples are more difficult to identify and mitigate, Burke says. A relatively small, deliberately crafted change to an input can alter the way a machine learning model predicts results and push it towards an incorrect prediction.
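The toy example below shows the principle on a deliberately simple linear classifier: a small, targeted nudge to the input flips the decision (the weights and inputs are made up):

```python
# Toy illustration of an adversarial sample: a small, targeted change to the input
# flips a simple linear classifier's decision. Weights and inputs are invented.

WEIGHTS = [0.9, -0.4, 0.2]
BIAS = -0.05

def predict(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return ("approve" if score > 0 else "reject"), round(score, 3)

if __name__ == "__main__":
    original = [0.02, 0.20, 0.05]
    # Nudge each feature slightly in the direction of its weight's sign:
    adversarial = [x + 0.08 * (1 if w > 0 else -1) for x, w in zip(original, WEIGHTS)]
    print("original: ", predict(original))     # rejected
    print("perturbed:", predict(adversarial))  # small nudges flip it to approved
```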

“These security challenges are increasing, with the point of attack extended through IoT and connected systems, as well as more sophisticated threats,” says Burke.

Although organisations will leverage AI to fight security threats, the battleground will remain significant as cyber criminals can be counted on to have more tools at their disposal, he adds.