As autonomous AI agents enter commerce, consumers are being warned that the technology is scaling faster than trust, accountability and awareness.
According to Miguel Fornes, information security manager at Surfshark, agentic AI's unprecedented acceleration has brought it to a new and potentially risky phase: agentic commerce, often marketed as agentic shopping.
“We are witnessing the largest technological war humanity has ever seen unfold right before our eyes. In 2025, more money and resources were poured into AI-related ventures than the US and the USSR spent during the entire space race that culminated in Apollo 11 landing on the Moon,” says Fornes. “The difference is that this time, the battlefield is the consumer’s browser, inbox, and bank account.”
AI-generated bots and deepfakes are already flooding the internet – sometimes merely annoying, but increasingly dangerous when exploited by cybercriminals. Agentic AI systems dramatically amplify this risk by automating the entire attack chain end to end.
“Imagine a tool that doesn’t just write a spam email, but also creates a fake profile, chats convincingly in real time, and carries out online banking operations – all without a human ever touching a keyboard,” Fornes says.
Unlike traditional AI assistants that respond to prompts, agentic systems are designed to act independently: browsing websites, logging into accounts, making decisions, and executing transactions.
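The distinction between a prompt-response assistant and an autonomous agent can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation; every function name below is hypothetical:

```python
# Illustrative sketch only: a traditional assistant answers one prompt,
# while an agent plans and executes actions in a loop with no human
# confirming each step. All function names here are hypothetical.

def assistant(prompt: str) -> str:
    """Traditional assistant: one prompt in, one suggestion out."""
    return f"Suggested answer for: {prompt}"

def agent(goal: str, max_steps: int = 5) -> list[str]:
    """Agentic system: keeps acting until it decides the goal is met.
    Side effects (logins, payments) happen inside the loop."""
    log: list[str] = []
    for _ in range(max_steps):
        action = plan_next_action(goal, log)  # e.g. "log in", "pay"
        log.append(execute(action))           # irreversible actions occur here
        if is_goal_met(goal, log):
            break
    return log

# Stub implementations so the sketch runs end to end.
def plan_next_action(goal, log):
    steps = ["browse retailer site", "log in to account",
             "add item to cart", "execute payment"]
    return steps[min(len(log), len(steps) - 1)]

def execute(action):
    return f"did: {action}"

def is_goal_met(goal, log):
    return any("payment" in entry for entry in log)
```

The point of the sketch is the loop: once the goal is handed over, every intermediate decision – including the payment – executes without a human touching the keyboard.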
As competition intensifies, tech companies are racing to release agentic tools at exceptionally low cost – or free – often branding them as personal assistants for everyone.
“A human executive assistant is vetted, trusted, and – most importantly – can be sued if they steal your identity,” Fornes points out.
Agentic AI systems, by contrast, operate without legal liability, moral judgment, or contextual understanding.
“Using an experimental agentic AI to book your vacation is the digital equivalent of handing your unlocked phone and wallet to a stranger on the street holding a sign that says, ‘I’m good at finding cheap flights,’” he adds. “Would you trust that random guy? I certainly wouldn’t.”
While often described as productivity tools, agentic AI systems are fundamentally different: they can execute actions across personal devices and accounts – sometimes with unintended consequences.
“Agentic AI is not just a tool – it’s an extremely sharp and powerful one,” says Fornes. “If you give it unrestricted access to your computer to ‘optimize your workflow,’ you might come back to find it deleted your family photos to save space, because technically, it did optimize your storage.”
Unlike human assistants, these systems cannot reliably recognize when an action is sensitive, personal, or irreversible. In effect, consumers are unknowingly testing experimental autonomous systems on their real lives: “You are essentially beta-testing extremely powerful technology with your actual life,” Fornes warns.
Many agentic shopping and productivity tools require deep access to emails, calendars, browsers, and financial services. While marketed as convenience, this level of access introduces significant privacy risks.
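One standard mitigation for this kind of over-broad access is least privilege: grant an agent only the permissions a task actually needs and flag everything else for human review. A minimal sketch of that idea, using hypothetical scope names rather than any real service's API:

```python
# Hedged sketch of a least-privilege check: compare the scopes an
# agentic tool requests against an allowlist of what the task needs.
# Scope names are hypothetical, not tied to any real platform.

SAFE_SCOPES = {"calendar.read", "email.read"}

def review_request(requested: set[str]) -> dict:
    """Split requested scopes into ones to grant and ones to flag
    for a human decision (e.g. anything that can send or spend)."""
    return {
        "grant": sorted(requested & SAFE_SCOPES),
        "flag": sorted(requested - SAFE_SCOPES),
    }

# An agentic shopping tool asking for broad access:
decision = review_request({"email.read", "email.send", "payments.execute"})
```

Here the read-only email scope would be granted, while the ability to send mail or execute payments is surfaced to the user instead of being handed over silently.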
“When you ask an agentic AI to handle your emails or manage your calendar, you’re opening the front door to your private life,” he cautions.
Despite rapid deployment, agentic AI systems remain prone to hallucinations and lack enforceable boundaries – raising concerns among privacy and security experts.
“Until this technology stops hallucinating and starts understanding boundaries, using it for critical tasks is like playing Russian roulette with your privacy settings.”