Technology continues to evolve at an unprecedented pace, and predictions about coming trends are always a topic of debate: what’s real, and what’s hype?
By Liz Centoni, executive vice-president, chief strategy officer and GM of Applications at Cisco
While there’s always a wave of excitement around the next big thing, I think the last year has been different. Advancements in AI, especially generative AI (GenAI), are leading to a once-in-a-generation shift. This is opening vast new opportunities and transforming industries, modes of operation, and career paths.
Not surprisingly, this year I’ve got a lot on AI, with predictions filled with “Automated Inspiration”.
The Cisco AI Readiness Index revealed that 95% of respondents have an AI strategy in place or under development, but only 14% are fully ready to integrate AI into their business. What will it take for organisations to adopt and integrate AI? How can innovators leverage change to remain competitive? Where and how will innovation and trust intersect?
These insights and questions have inspired my predictions for coming tech trends in 2024.
GenAI will expand fast into the business world with GenAI-powered NLIs, customised LLMs, tailored B2B applications and business context
Natural language interfaces (NLIs) powered by GenAI will be expected for new products, and more than half will offer them by default by the end of 2024. GenAI will also be leveraged in B2B interactions, with users demanding more contextualised, personalised, and integrated solutions.
GenAI will offer APIs, interfaces, and services to access, analyse, and visualise data and insights, becoming pervasive across areas such as project management, software quality and testing, compliance assessments, and recruitment efforts. As a result, observability for AI will grow.
We will also see the rise of specialised, domain-specific AI models and a shift to smaller, specialised LLMs with higher levels of accuracy, relevancy, precision and niche domain understanding. For instance, LLaMA-7B models – often used for code completion and few-shotting – will see increasing adoption.
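To make that concrete, here is a minimal sketch of few-shot prompting a compact code model through the Hugging Face transformers pipeline; the model name and the example pairs are illustrative placeholders rather than a specific recommendation.

```python
# A minimal sketch: few-shot prompting a small, domain-specific code model.
# The model name below is an illustrative placeholder; any compact model
# tuned for code could stand in.
from transformers import pipeline

generator = pipeline("text-generation", model="codellama/CodeLlama-7b-hf")

# A handful of in-context examples steer the small model toward the niche task
# (here, converting camelCase identifiers to snake_case).
prompt = (
    "camelCase -> camel_case\n"
    "parseHTTPResponse -> parse_http_response\n"
    "loadUserProfile ->"
)
result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```

The point of the small, specialised model is that the few-shot examples do most of the steering, so the task can be served with far less compute than a large general-purpose LLM.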
Moreover, multi-modality – combining data types such as images, text, speech, and numerical data with intelligent processing algorithms – will expand B2B use cases. This will yield better outcomes in areas such as business planning, medicine, and financial services.
A movement for responsible, ethical use of AI will begin with clear AI governance frameworks that respect human rights and values
Adoption of AI is a once-in-a-generation technology shift, and it sits at the intersection of innovation and trust. Yet 76% of organisations don’t have comprehensive AI policies in place. There is broad agreement that we need regulation, policy, and industry self-policing and governance to mitigate the risks from GenAI.
However, we need to get more nuanced in areas like IP infringement, where pieces of existing original works are scraped to generate new digital art. This area needs regulation.
We must also ensure that consumers have access to and control over their data in the spirit of the recent EU Data Act. With the rising importance of AI systems, available public data will soon hit a ceiling and high-quality language data will likely be exhausted before 2026. Organisations need to shift to private and/or synthetic data which opens the possibility for unintended access and usage.
There is plenty that organisations can do on their own. Leaders must commit to transparency and trustworthiness around the development, use, and outcomes of AI systems. On reliability, for instance, organisations should address false content and unanticipated outcomes through responsible AI (RAI) assessments, robust training of LLMs to reduce the chance of hallucinations, sentiment analysis, and output shaping.
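As a rough illustration of what output shaping can look like in practice, here is a minimal sketch of a post-generation guardrail; the blocked terms and the source check are placeholders, whereas production systems would rely on trained classifiers and policy engines.

```python
# A minimal sketch of "output shaping": screening a model's draft answer before
# it reaches the user. The banned-topic list and the citation check below are
# illustrative placeholders only.
BLOCKED_TERMS = {"guaranteed returns", "medical diagnosis"}

def shape_output(draft: str, cited_sources: list[str]) -> str:
    lowered = draft.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Refuse outright when the draft touches a disallowed topic.
        return "I can't help with that request."
    if not cited_sources:
        # Unsupported claims are flagged rather than stated as fact.
        return draft + "\n\n(Note: no sources were retrieved; please verify.)"
    return draft

print(shape_output("Stocks offer guaranteed returns.", []))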
In 2024, we will see companies of every size and sector formally outline how responsible AI governance guides internal development, application, and use of AI. Until tech companies can credibly show they are trustworthy, you can anticipate governments creating more policies.
Consumers and companies will face increased risks from AI-generated disinformation, scams, and fraud, prompting tech companies and governments to work together for solutions
In 2024, AI-enabled disinformation, scams, and fraud will continue to grow as a threat to businesses, people, and even candidates and elections. In response, we’ll see more investments in detection and risk mitigation, including new AI solutions that guard against cloned voices, deepfakes, social media bots, and influence campaigns.
AI models will be trained on large datasets for better accuracy and effectiveness. New mechanisms for authentication and provenance will promote transparency and accountability.
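One simple way to picture those provenance mechanisms: content is hashed and signed at the source so anyone downstream can verify it hasn’t been tampered with. The sketch below uses an Ed25519 signature from the Python cryptography library; the workflow is illustrative, not a description of any specific standard.

```python
# A minimal sketch of content provenance: sign a hash of the media at the
# source, verify the signature downstream. Illustrative only; real provenance
# schemes also bind metadata (creator, device, edit history) into the record.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import hashlib

signing_key = Ed25519PrivateKey.generate()   # held by the content creator
verify_key = signing_key.public_key()        # published for consumers

content = b"original video bytes ..."
signature = signing_key.sign(hashlib.sha256(content).digest())  # shipped with the content

# Downstream verification: any alteration of the content breaks the check.
try:
    verify_key.verify(signature, hashlib.sha256(content).digest())
    print("provenance check passed")
except InvalidSignature:
    print("content was altered or signature is invalid")
```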
In keeping with the G7 Guiding Principles on AI regarding threats to democratic values, the Biden administration’s Safe AI Executive Order, and the EU AI Act, we’ll also see more collaboration between the private sector and governments to raise threat awareness and implement verification and security measures.
We’ll see cooperation to sanction rogue actors and ensure regulatory compliance. Businesses must prioritise advanced threat detection and data protection, regular vulnerability assessments, updating security systems, and thorough audits of network infrastructures. For consumers, vigilance will be key to protecting identities, savings, and credit.
Quantum progress, but not quantum leaps, as the future of cryptography and networking continues to take shape
We will see adoption of post-quantum cryptography (PQC) – even before it is standardised – as a software-based approach that works with conventional systems to protect data from future quantum attacks.
PQC will be adopted by browsers, operating systems, and libraries, and innovators will experiment by integrating it into protocols such as SSL/TLS 1.3, which today rely on classical cryptography.
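To sketch what that experimentation looks like under the hood: hybrid handshakes run a classical key exchange and a post-quantum KEM side by side, then derive the session key from both secrets, so security holds as long as either primitive does. The example below is a conceptual sketch; the PQC step is a hypothetical placeholder standing in for a real KEM such as ML-KEM (Kyber).

```python
# A minimal sketch of hybrid (classical + post-quantum) key establishment,
# the pattern PQC experiments in TLS 1.3 follow. The pqc_encapsulate() helper
# is a hypothetical stand-in for a real KEM, not a real library call.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
import os

def pqc_encapsulate(pqc_public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical placeholder: a real KEM returns (ciphertext, shared_secret)."""
    return b"pqc-ciphertext", os.urandom(32)   # stand-in only

# Classical ECDH half of the handshake.
client_ecdh = X25519PrivateKey.generate()
server_ecdh = X25519PrivateKey.generate()
ecdh_secret = client_ecdh.exchange(server_ecdh.public_key())

# Post-quantum half of the handshake (placeholder KEM).
_, pqc_secret = pqc_encapsulate(b"server-pqc-public-key")

# Both secrets feed one KDF, so the session key stays safe as long as
# either the classical or the post-quantum primitive remains unbroken.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid handshake",
).derive(ecdh_secret + pqc_secret)
print(session_key.hex())
```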
PQC will also start to trickle down to enterprises as they aim to ensure data security in the post-quantum world.
Another trend will be the growing importance of quantum networking which in four or five years – perhaps more – will enable quantum computers to communicate and collaborate for more scalable quantum solutions. Quantum networking will leverage quantum phenomena such as entanglement and superposition to transmit information.
Quantum key distribution (QKD), as an alternative or a complement to PQC depending on the level of security and performance required, will also leverage quantum networking. Quantum networking will see significant new research and investment by governments and financial services firms, which have high demands for data security and processing.
Unleashing the future of AI-driven customisation, enterprises will embrace the power and potential of API abstraction
In the year ahead, businesses will seek innovative ways to leverage the immense power and benefits of AI without the complexity and cost of building their own platforms.
Application programming interfaces (APIs) will play a pivotal role. APIs will increasingly act as an “abstraction layer” – seamless bridges that integrate a multitude of pre-built AI tools, services, and systems with little development or infrastructure setup. With access to a vast array of AI capabilities through APIs, teams will automate repetitive tasks, gain deeper insights from data, and enhance decision making.
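As a rough sketch of the pattern, the example below puts one in-house interface in front of two hosted providers so callers never depend on a specific vendor; the provider classes and routing rule are hypothetical placeholders, not real client libraries.

```python
# A minimal sketch of an API "abstraction layer": one in-house contract in
# front of several hosted AI providers, so teams can swap or combine models
# without rewriting callers. Provider names here are illustrative placeholders.
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Common contract every provider adapter must satisfy."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedProviderA(TextModel):
    def complete(self, prompt: str) -> str:
        # In practice this would call the provider's REST API over HTTPS.
        return f"[provider-a completion for: {prompt!r}]"

class HostedProviderB(TextModel):
    def complete(self, prompt: str) -> str:
        return f"[provider-b completion for: {prompt!r}]"

def pick_model(task: str) -> TextModel:
    # Routing rule: a cheap general model by default, a specialist where needed.
    return HostedProviderB() if task == "code-review" else HostedProviderA()

print(pick_model("summarise").complete("Summarise this quarter's incidents."))
```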
This year will also mark the beginning of a race to API-driven customisation where organisations can choose and combine APIs from various providers, easily tailoring AI solutions to meet unique and novel requirements.
Flexibility and scalability will foster effortless collaboration with external AI experts, startups, and research institutions, fueling an exchange of ideas and breakthrough advancements. In fact, these curated “model garden” ecosystems are already taking shape and in 2024 we’ll see them really take off.
You can’t greenwash AI – advancements will drive even more energy usage while unlocking new energy networking and efficiency paradigms
Sustainable energy plays a vital role in addressing climate change. By selecting smaller AI models with fewer layers and filters specific to their use cases, companies will begin to reduce energy consumption and costs compared with general-purpose systems. These dedicated systems are trained on smaller, highly accurate data sets and efficiently accomplish specific tasks, whereas large, general deep learning models consume vast amounts of data.
The fast-emerging category of energy networking, which combines the capabilities of software-defined networking and an electric power system made up of direct-current micro grids, will also contribute to energy efficiency.
By applying networking to power and connecting it with data, energy networking offers comprehensive visibility and benchmarking of existing emissions, and an access point for optimising power usage, distribution, transmission, and storage.
Energy networking will also help organisations measure energy usage and emissions more accurately, automate many functions across IT, smart buildings, and IoT sensors, and unlock inefficient and unused energy. With embedded energy management capabilities, the network will become a control plane for measuring, monitoring, and managing consumption.
‘Shift left’ will combine collaboration, modern converged platforms, and a little help from AI to reveal a new programming experience – and better software
As organisations continue to “shift left”, software development will change with novel tools, approaches, and technologies.
Programmers will leverage platforms and collaboration – and even a little help from AI – to centralise toolkits and unlock newfound efficiency so they can focus on delivering exceptional digital experiences. For instance, they’ll wield cloud-native application protection platforms (CNAPP), cloud security posture management (CSPM), and cloud workload protection platforms (CWPP) to combat tool sprawl, streamline workflows, and eliminate the burden of managing disjointed tools.
Some will continue struggling with disparate point solutions, leaving security gaps and software supply chain issues. Innovators will use AI to speed up delivery and handle tedious tasks like testing for defects and errors. Along the way, collaboration tools and AI assistants will be trusted companions as teams tackle the intricacies of security, observability, and infrastructure.
They will also use AI-derived insights to navigate the intricacies of components, protocols, and tools. Human checks and balances must ensure AI-based decisions are fair, unbiased, and aligned with ethical and moral values. We believe AI should augment human decision making, not replace it entirely.
AI has emerged as both a catalyst and a canvas for the future. It’s already in our homes, our cars, our offices – and in our pockets. As we marvel at the progress we’ve made in a short time, we must also balance the benefits and the risks.
Trust between people and the AI systems and tools they use is fundamental and non-negotiable. That means providing clarity on what AI can and cannot do with new data transparency and responsibility frameworks, new efforts to educate people and businesses about how disruption might happen, teaching the skills that will be needed for new AI-enabling and -enabled jobs, and new ways to collaborate with the best interests of people at heart.
It’s an exciting time. I look to the year ahead with a sense of optimism and awe based on my deeply held belief that trust is the necessary ingredient for every new tech wave to take hold. What’s good for the world is good for business. Together, let’s advance the promise of AI with confidence.