Microsoft has released Cyber Pulse, a new digital briefing for business leaders that examines how the security landscape is evolving with AI.

The briefing focuses on how organisations are deploying AI agents and what it takes to secure, govern, and scale them responsibly.

Microsoft’s new research finds that 80% of the Fortune 500 is deploying active agents built with low-code/no-code tools. This signals a major shift: AI agents are no longer the domain of specialists; they are an integral part of operations that anyone can use. The catch is that only some of these agents are sanctioned by IT, and many are unsanctioned, unobserved, or over-privileged.

“As AI adoption accelerates, too few leaders have visibility into the agents operating across their enterprise,” says Kerissa Varma, chief security advisor at Microsoft Africa.

“Unsupervised or ungoverned agents can quickly escalate cyber and business risk, threatening security, business continuity, and reputation.

“AI agents bring enormous opportunity, but without proper oversight, even one agent’s risky behaviour can amplify internal threats and create new failure modes organisations are unprepared to manage.”

This Cyber Pulse brief outlines why leaders should demand visibility into their agents, and how to ensure the safe and trustworthy implementation of agents in their organisations. Additional findings from the briefing include:

  • AI agent adoption is accelerating across all regions, with EMEA accounting for approximately 42% of all active agents globally.
  • AI agents are scaling at pace across all industries, with financial services, manufacturing, and retail leading in agent adoption. Financial services, including banking, capital markets, and insurance, now represents about 11% of all active agents worldwide. Manufacturing accounts for 13% of global agent usage, showing widespread adoption in factories, supply chains, and energy operations. Retail represents 9%, with agents used to improve customer experience, inventory management, and frontline processes.
  • Only 47% of organisations report having GenAI-specific security controls in place.
  • 29% of employees admit to using unsanctioned AI agents at work.

Rapidly deploying AI agents without strong oversight can outpace security and compliance controls, creating opportunities for shadow AI and increasing the risk that agents with too much access or the wrong instructions become unintended “double agents”, says Varma.

“Organisations urgently need effective governance and security to safely adopt agents, promote innovation, and reduce risk. Just like human users, AI agents must be protected with strong observability, governance, and Zero Trust principles,” says Varma.

In the same way organisations secure human employees, Zero Trust for agents requires:

  • Least privilege access: Give every user, AI agent, or system only what they need, no more.
  • Explicit verification: Always confirm who or what is requesting access using identity, device health, location, and risk level.
  • Assume compromise can occur: Design systems expecting that attackers will get inside.
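The three principles above can be illustrated with a minimal authorisation sketch. This is not a Microsoft API; every name, field, and permission here is hypothetical, and a real deployment would rely on enterprise identity and policy systems rather than an in-memory table.

```python
from dataclasses import dataclass

# Hypothetical request shape: an agent asking to touch a resource,
# along with the signals used for explicit verification.
@dataclass
class AccessRequest:
    agent_id: str
    resource: str
    device_healthy: bool
    risk_level: str  # "low", "medium", or "high"

# Least privilege: each agent is granted only the resources it needs.
# (Illustrative grant table; a real system would use policy-driven access.)
PERMISSIONS = {
    "invoice-agent": {"billing-db"},
    "hr-agent": {"hr-records"},
}

def authorize(req: AccessRequest) -> bool:
    """Apply the three Zero Trust checks to an agent's request."""
    # Explicit verification: identity must be known, the device healthy,
    # and the assessed risk acceptable.
    if req.agent_id not in PERMISSIONS:
        return False
    if not req.device_healthy or req.risk_level == "high":
        return False
    # Least privilege: allow only resources on the agent's grant list.
    # Assume compromise: everything not explicitly granted is denied,
    # so an attacker who gets inside still hits a default-deny wall.
    return req.resource in PERMISSIONS[req.agent_id]
```

The default-deny structure is the point: even if one agent is hijacked, it can only reach the narrow set of resources it was explicitly granted.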


Getting the most out of AI agents

Frontier firms are using the AI wave to modernise governance, reduce unnecessary data exposure, and deploy enterprise‑wide controls. They’re also driving a cultural shift; business leaders may set the AI vision, but IT and security teams are now equal partners in observability, governance, and safe experimentation.

It starts with observability: you can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organisation (IT, security, developers, and AI teams) to understand what agents exist, who owns them, what systems and data they touch, and how they behave.

Observability includes five core areas:

  • Registry: A centralised registry acts as a single source of truth for all agents across the organisation and helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications.
  • Visualisation: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understand dependencies, and monitor behaviour and impact, supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external threats. Security signals, policy enforcement, and integrated tooling help organisations detect compromised or misaligned agents early and respond quickly, before issues escalate into business, regulatory, or reputational harm.
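To make the first of these areas concrete, here is a minimal in-memory sketch of an agent registry that tracks ownership and sanction status and can quarantine unsanctioned agents. All names are hypothetical; an enterprise registry would be backed by identity, policy, and audit systems rather than a Python dictionary.

```python
class AgentRegistry:
    """Illustrative single source of truth for agents in an organisation."""

    def __init__(self):
        # agent_id -> record of owner, sanction status, and quarantine flag
        self._agents = {}

    def register(self, agent_id: str, owner: str, sanctioned: bool) -> None:
        """Record an agent, its accountable owner, and whether IT approved it."""
        self._agents[agent_id] = {
            "owner": owner,
            "sanctioned": sanctioned,
            "quarantined": False,
        }

    def quarantine_unsanctioned(self) -> list:
        """Restrict every agent that was never approved by IT.

        Returns the IDs of the agents that were quarantined.
        """
        flagged = []
        for agent_id, record in self._agents.items():
            if not record["sanctioned"]:
                record["quarantined"] = True
                flagged.append(agent_id)
        return flagged

    def owner_of(self, agent_id: str) -> str:
        """Accountability: every registered agent has a named owner."""
        return self._agents[agent_id]["owner"]
```

Even this toy version shows why a registry curbs agent sprawl: discovery, accountability, and quarantine all become single lookups against one authoritative record.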

With AI adoption accelerating, this level of end‑to‑end visibility and governance is essential to maintaining control, which is why Microsoft developed Agent 365 to transform the enterprise‑wide need for transparency and oversight into a practical, scalable capability.

Agent 365 is Microsoft’s unified control plane for managing AI agents across an organisation. It provides a centralised, enterprise-grade system to register, govern, secure, observe, and operate AI agents, whether they are built on Microsoft platforms, open-source frameworks, or third-party systems.

This unified control plane provides the strategic foundation organisations need to align teams and accelerate their AI journey responsibly.

“Enterprises that will lead in the next phase of AI adoption are those that move fast and bring business, IT, security, and developers together to observe, govern, and secure their AI transformation,” concludes Varma.