According to research house McKinsey, while 64% of the workforce report using generative AI (GenAI) tools, this has yet to deliver value at an enterprise level. A large majority of organisations are still at the experimental stage, often piloting use, and this has not translated into value at an EBIT level, says McKinsey.

GenAI is predicted to be used where companies are looking to drive growth and innovation, but Dariel solutions architect Sasha Slankamenac says that while usage will increase and value will become clearer, enterprises need to be aware of, and prepared for, the risks associated with the use of GenAI tools within the organisation.

While adoption has not been widespread, growth has still outpaced the development of data governance and security measures. “This brings with it data sovereignty issues, among others,” says Slankamenac. He says it would make sense to develop frameworks before growth escalates, and the risks along with it.

One of the first issues organisations need to be aware of, he says, is platform fragility. Introduced by the use of GenAI tools, platform fragility creates new vulnerabilities and amplifies existing risks across data, security, governance, and operations. This fragility stems from an overreliance on unproven technology and the integration of disparate tools without a cohesive strategy or adequate security guardrails.

Slankamenac says that many organisations plug GenAI into systems through quick API wrappers or plugins: “These create technical debt that’s difficult to maintain when model versions, data schemas or licensing terms change. Gartner and others warn that by 2030, unmanaged GenAI integrations will rival legacy systems as a major source of technical debt and upgrade delays.”
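To illustrate the point about quick API wrappers, the sketch below isolates a GenAI provider behind a small internal interface with an explicitly pinned model version, so that version or vendor changes stay in one place rather than scattered across the codebase. All names here (CompletionClient, VendorXClient, MODEL_VERSION) are hypothetical, not a specific vendor's API.

```python
# A minimal sketch of isolating a GenAI provider behind an internal
# interface so that model/version changes stay in one place.
# All names are hypothetical, not a specific vendor's SDK.
from typing import Protocol


class CompletionClient(Protocol):
    def complete(self, prompt: str) -> str: ...


class VendorXClient:
    # Pin the model version explicitly; an unpinned "latest" alias is
    # a common source of silent behaviour changes across the codebase.
    MODEL_VERSION = "vendorx-model-2024-06"

    def complete(self, prompt: str) -> str:
        # The real vendor SDK call would go here; stubbed for the sketch.
        raise NotImplementedError("wire up the vendor SDK here")


def summarise_ticket(client: CompletionClient, ticket_text: str) -> str:
    # Application code depends only on the internal interface, so
    # swapping providers or model versions is a one-file change.
    return client.complete(f"Summarise this support ticket:\n{ticket_text}")
```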

To address these issues, organisations need to move from an ad-hoc adoption approach to a strategic, governed framework. The countermeasures would include:

  • Strong Governance and Policy: Establish a clear governance framework from the outset that defines acceptable use, security policies, and incident response plans.
  • Risk-Based Security Measures: Implement guardrails such as input validation, AI firewalls, and robust access controls to combat prompt injection and other attacks (a minimal validation sketch follows this list).
  • Human Oversight and Validation: Maintain human oversight (“human-in-the-loop”) for critical decisions and validate all AI outputs against reliable sources to mitigate hallucinations and bias.
  • Developer Training and Red Teaming: Train developers to identify and mitigate AI-generated code vulnerabilities and conduct “red teaming” exercises to proactively find security gaps.
  • Strategic Integration: Approach GenAI as an integrated “layer” within the existing IT infrastructure, rather than a collection of fragmented tools, to ensure consistency and scalability.
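As a minimal sketch of the input-validation guardrail mentioned above, the check below rejects obvious prompt-injection attempts before user text ever reaches the model. The deny-list patterns and length cap are illustrative assumptions only; real AI firewalls are considerably more sophisticated.

```python
import re

# Illustrative deny-list patterns; real guardrails (AI firewalls,
# vendor moderation endpoints) cover far more than this.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

MAX_PROMPT_CHARS = 4_000  # arbitrary cap for the sketch


def validate_user_input(text: str) -> str:
    """Reject obvious prompt-injection attempts before the text
    reaches the model; raise so callers must handle the rejection."""
    if len(text) > MAX_PROMPT_CHARS:
        raise ValueError("input too long")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError("possible prompt injection detected")
    return text
```

In practice a check like this would sit alongside vendor moderation endpoints and access controls rather than replace them.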

Another risk is data leakage. Slankamenac says that employees often paste confidential or regulated information into public tools. “The Gartner 2025 forecast warns that 40% of AI-related data breaches will stem from cross-border GenAI misuse by 2027.”

While unintentional exposure is a key culprit, other leakage risks include shadow AI (using unapproved GenAI tools); data retention (sharing information with a third-party AI platform and having it permanently stored and used); IP and compliance violations (leaked IP risks competitive disadvantage); and malicious exploitation (exposed data can be exploited by attackers).
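One hedged way to surface the shadow AI problem described above is to scan outbound proxy logs for traffic to known public GenAI endpoints that are not on an approved list. The log format, field positions, and domain lists below are assumptions made for the sketch.

```python
# Hypothetical shadow-AI detection: scan proxy logs for requests to
# known public GenAI endpoints that are not on the approved list.
from collections import Counter

KNOWN_GENAI_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}
APPROVED = {"api.openai.com"}  # e.g. covered by an enterprise agreement


def flag_shadow_ai(log_lines: list[str]) -> Counter:
    """Count hits per unapproved GenAI domain. The log format is
    assumed to be whitespace-delimited with the host in field three."""
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 3:
            host = fields[2]
            if host in KNOWN_GENAI_DOMAINS and host not in APPROVED:
                hits[host] += 1
    return hits
```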

“Ignorance is rife when it comes to GenAI. Employees don’t understand how their inputs are stored, leading to people inputting payroll, patent and other sensitive company information,” says Slankamenac. He says that mitigation strategies are vital to counteract the risks.

Mitigation could be as simple as establishing governance, using secure platforms, training employees and ensuring there is sufficient monitoring and control.
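As an illustration of the monitoring-and-control point, a simple outbound redaction filter can scrub obvious sensitive patterns (an ID number, an email address, a salary figure) before a prompt leaves the organisation, and log that it did so, keeping visibility for security teams. The patterns here are hypothetical and nowhere near exhaustive.

```python
import logging
import re

logger = logging.getLogger("genai_gateway")

# Illustrative patterns only; a production DLP filter would cover
# far more categories (payroll IDs, patent numbers, customer data).
REDACTIONS = [
    (re.compile(r"\b\d{13}\b"), "[REDACTED-ID-NUMBER]"),        # 13-digit ID number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"\bZAR\s?\d[\d,.]*\b"), "[REDACTED-AMOUNT]"),  # salary figures
]


def redact_outbound_prompt(text: str) -> str:
    """Scrub known sensitive patterns and log that a redaction
    happened, so security teams retain visibility."""
    for pattern, placeholder in REDACTIONS:
        text, count = pattern.subn(placeholder, text)
        if count:
            logger.warning("redacted %d match(es) of %s", count, placeholder)
    return text
```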