Gartner has identified six steps to help organisations reduce the risks of AI agent sprawl.

The analyst firm predicts that, by 2028, the average global Fortune 500 enterprise will have more than 150,000 agents in use, up from fewer than 15 in 2025 – growth that will generate significant agent sprawl, IT complexity and management challenges.

Max Goss, senior director analyst at Gartner, says: “As CIOs and IT leaders see an explosion of AI agents across their organisations, many are contending with an ungoverned sprawl of agents that expose their organisations to a range of risks, including misinformation, oversharing and data loss.

“Many organisations resort to blocking or restricting the use of AI agents, but this is not a long-term solution. If employees are unable to work in the sanctioned tools, they will likely go around the organisation’s controls and start using shadow AI, which presents far greater risks.

“They need to find a balance where they can govern agents and manage sprawl, but also safely empower employees to innovate with these tools.”

To establish governance and guardrails that reduce the risks of agent sprawl, Gartner recommends CIOs and IT leaders take the following six steps:

  • Establish agent governance and policies: Set clear rules for when and how agents are built, who can create and share them, and what connectors are permitted.
  • Build a centralised agent inventory: Use AI trust, risk and security management (AI TRiSM) tools to discover and categorise agents across applications, covering both sanctioned tools and shadow AI solutions. With an inventory in place, organisations can build adaptive controls that enforce the right policies based on the level of risk each agent presents.
  • Define agent identity, permissions and life cycle model: Manage agent identities, permission models and access controls, and regularly review and retire redundant agents to prevent uncontrolled sprawl.
  • Develop AI information governance: Govern what information the AI tool or agent has access to and ensure that there is a process in place to keep the data current, manage its permissions to prevent oversharing, and archive the data when it is obsolete.
  • Monitor and remediate agent behaviour: Establish ongoing visibility into agent usage, ensure policy compliance, detect anomalous behaviour, and correct agents that exceed their intended scope or risk tolerance.
  • Foster a culture of responsible AI usage: Support the workforce with training programmes and a community of practice to drive adoption and amplify best practices on agent management across the organisation.
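To make the inventory and life cycle steps above more concrete, the sketch below shows what a centralised agent register with risk-tiered adaptive controls and an overdue-review check might look like. This is a minimal illustration under assumed names and thresholds – the agent fields, risk tiers, controls and 90-day review window are all hypothetical, not Gartner's model or any vendor's product:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical mapping from risk tier to the controls applied at that tier.
CONTROLS = {
    "low": ["usage-logging"],
    "medium": ["usage-logging", "connector-allowlist"],
    "high": ["usage-logging", "connector-allowlist", "human-approval"],
}

@dataclass
class Agent:
    name: str
    owner: str
    sanctioned: bool          # discovered in a sanctioned tool, or via shadow AI
    connectors: list[str]     # external systems the agent can reach
    last_reviewed: date

def risk_tier(agent: Agent) -> str:
    """Classify an agent: shadow AI and broad connector access raise the tier."""
    if not agent.sanctioned:
        return "high"
    return "medium" if len(agent.connectors) > 2 else "low"

def review_due(agent: Agent, today: date, max_age_days: int = 90) -> bool:
    """Flag agents whose life cycle review is overdue (retirement candidates)."""
    return (today - agent.last_reviewed) > timedelta(days=max_age_days)

# Example inventory entry: an unsanctioned agent with wide connector access.
bot = Agent("expense-helper", "finance", sanctioned=False,
            connectors=["email", "sharepoint", "crm"],
            last_reviewed=date(2025, 1, 10))

print(risk_tier(bot))                           # shadow AI lands in the strictest tier
print(CONTROLS[risk_tier(bot)])                 # controls enforced for that tier
print(review_due(bot, today=date(2025, 6, 1)))  # overdue for life cycle review
```

The point of the sketch is the policy shape, not the code: once every agent sits in one inventory, the same small set of rules can be applied consistently, rather than per-tool and ad hoc.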