The adoption of artificial intelligence (AI) is accelerating across the business landscape as organisations aim to reap its potential benefits.
However, recent incidents involving AI leaking corporate data show that embracing AI comes with a risk, as employees using AI tools at work might become involuntarily threat actors.
“As generative AI tools become deeply embedded in the workplace, the security risks stemming from employee misuse — intentional and accidental — are escalating,” says Zilvinas Girenas, AI security expert at AI orchestration platform nexos.ai.
“Data breaches and leaks of sensitive information cause reputational damage, thus, many companies are torn between enablement and banning the use of AI, which creates friction between employee productivity and security.”
According to Girenas, employees can unintentionally cause cyber threats when using AI tools for three key reasons:
- Data exposure. Employees might input sensitive or confidential company data into AI tools, especially cloud-based generative AI platforms, without realising that these inputs could be stored, analysed, or even used to train models. This can lead to unintentional data leaks.
- Shadow AI usage. If employees use AI tools that haven’t been approved by the organisation’s IT or security teams, shadow IT is introduced. These unvetted platforms may lack the necessary security controls, compliance certifications, or data governance protections, creating blind spots in risk management.
- Prompt injection or model manipulation. AI tools can be vulnerable to prompt injection and data poisoning attacks. If employees rely on outputs from compromised AI models or bots, they could act on manipulated or malicious advice, for example harmful instruction in automated workflows, leading to potential damage or breaches.
“In today’s digital age, enhancing workflows with corporate data input shouldn’t come at the cost of security; however, without the right protection in place, it often does,” says Girenas.
To mitigate human-fueled AI vulnerabilities and secure the modern workplace, the following should be considered:
- Clear policy enforcement. Consistent implementation and communication of guidelines related to how employees are allowed to use which approved AI tools.
- Employee training. Educating staff on the safe and ethical use of AI tools.
- Robust governance. Implementing smart guardrails that allow safe and compliant AI adoption without stifling productivity.
“Having a secure and structured approach to adopting various AI tools empowers organizations to tap into the full potential of AI. It not only enhances productivity and efficiency across teams but also ensures that progress doesn’t come at the cost of cybersecurity or compliance,” says Girenas.