By Barry Buck – Organised crime has better AI adoption than your bank. I wish that were a joke.
Anthropic recently disclosed that hackers weaponised Claude to breach at least 17 organisations – including government bodies – using what they called “vibe hacking”.
The AI didn’t just write exploit code. It made strategic decisions about which data to steal, crafted psychologically targeted extortion demands, and even suggested ransom amounts. North Korean operatives used Claude to build fake profiles, land remote jobs at Fortune 500 companies, and write code once inside. Gartner reports that eight out of ten senior risk executives now rank AI-powered cyberattacks as the top emerging threat.
Professor Moriarty has upgraded to Claude Code and he’s not waiting for your compliance team to finish their risk assessment.
Now look at the good guys. A recent Hawk and Chartis study found that 89% of compliance leaders say their institution encourages AI use. Sounds promising – until you read the detail.
Only a third of banks use AI at scale for fraud prevention. AML monitoring? 22%. Sanctions screening? 16%. Regulatory reporting – arguably the most tedious, automatable function in the building – sits at 9%. The rest? “Individuals relying on AI on an ad hoc basis” – corporate speak for someone quietly feeding spreadsheets into ChatGPT and hoping nobody from group technology notices.
I’ve beaten this drum before: the compliance dogma paralysing enterprise AI adoption is not protecting organisations. It’s protecting the status quo while the threat landscape accelerates around them.
The solution isn’t complicated. Sandbox your internal data. Strip personal information before it touches cloud AI providers like Anthropic, OpenAI, or Google’s Gemini. Build a controlled pipeline that satisfies your POPIA obligations while actually letting your people use the tools that criminals are already using against them. It’s a viable, auditable architecture. I know, because we build exactly this kind of thing on Roboteur.
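To make the idea concrete, here is a minimal sketch of the redaction step in that pipeline: strip obvious personal identifiers from a prompt before it ever leaves your environment. The patterns and placeholder labels are illustrative assumptions, not Roboteur’s implementation – a production pipeline would use proper entity recognition and a tokenised vault, not three regexes.

```python
import re

# Illustrative PII patterns only (email, South African ID number, SA phone).
# A real pipeline would use NER models and reversible tokenisation.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SA_ID": re.compile(r"\b\d{13}\b"),           # 13-digit SA ID number
    "PHONE": re.compile(r"\b(?:\+27|0)\d{9}\b"),  # SA mobile/landline format
}

def redact(text: str) -> str:
    """Replace known PII patterns with labelled placeholders so the
    cloud provider only ever sees the redacted prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Flag the account for j.soap@example.com, ID 8001015009087, cell 0821234567."
print(redact(prompt))
# Only the redacted version is forwarded to the external AI endpoint.
```

The point is the architecture, not the regexes: redaction happens inside your perimeter, the mapping from placeholder back to real value never leaves it, and every substitution can be logged for audit.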
But here’s the rub: decisions were made about the fate of Arrakis in the Dune novels faster than most enterprise corporations would agree to even a stringent, fully compliant sandbox model.
By the time the steering committee has scheduled the second round of stakeholder alignment workshops, Moriarty has exfiltrated your customer database, extorted your board, and is funding his next campaign – all with agentic AI clones running autonomously.
Meanwhile, Sherlock and Watson are still manually digging through Excel spreadsheets for clues.
Barry Buck is the chief technology officer of Saucecode and the architect of Roboteur
www.saucecode.tech