As the chief technology officer (CTO) of an RPA and hyperautomation company serving large banks and financial institutions, I’ve observed firsthand the distinct challenges of implementing generative AI in environments where on-site capabilities are mandatory.
Unlike enterprises that can leverage cloud solutions, our clients operate within stringent security and privacy boundaries that make the development and deployment of AI solutions a fundamentally different proposition.
The real challenge: On-site AI capabilities
The article “Developing AI solutions is anything but simple” rightly highlights skill gaps and tool complexity in AI development.
However, for institutions like banks, the core problem isn’t a plethora of tools but the lack of tailored, on-site solutions. Cloud-based generative AI models are often incompatible with strict data residency and security requirements.
This is where innovations like Meta’s Llama Vision models have shown promise, particularly in document intelligence.
At my company, we’ve integrated Llama models into Roboteur, our hyperautomation platform, to enhance OCR, data validation, and transformation – all on-site.
Roboteur’s architecture emphasises affordability and flexibility, with custom plugins designed to meet niche client needs. By addressing industry-specific challenges and optimising on-premises capabilities, we’ve delivered significant ROI without sacrificing compliance or security.
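To make that concrete, here is a minimal sketch of an on-site document-intelligence step, assuming a Llama 3.2 Vision model served locally through Ollama; the prompt, field names, and file path are illustrative, and Roboteur’s actual plugin wiring is not shown.

```python
# Minimal sketch: extract structured fields from a scanned document using a
# locally hosted Llama 3.2 Vision model via Ollama. Assumes the model has
# already been pulled on-site (`ollama pull llama3.2-vision`); the prompt and
# field names are illustrative, not Roboteur's plugin API.
import json
import ollama

PROMPT = (
    "Extract the invoice number, issue date, and total amount from this "
    "document. Respond with a JSON object using the keys "
    "invoice_number, issue_date, total_amount."
)

def extract_invoice_fields(image_path: str) -> dict:
    response = ollama.chat(
        model="llama3.2-vision",
        messages=[{
            "role": "user",
            "content": PROMPT,
            "images": [image_path],  # the image stays on local disk; nothing leaves the premises
        }],
        format="json",               # ask the model for machine-parseable JSON
    )
    return json.loads(response["message"]["content"])

if __name__ == "__main__":
    print(extract_invoice_fields("scanned_invoice.png"))
```

Because the model runs behind the institution’s own firewall, this kind of step can sit alongside existing RPA bots without any document content crossing a network boundary.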
The overhyped AI ecosystem
While AI offers transformative potential, it’s crucial to temper expectations. Enterprises are often seduced by the hype around “seamless” and “autonomous” AI solutions without understanding their real-world limitations.
Generative AI is not a one-size-fits-all solution; it requires significant customisation to deliver value in specialised environments.
Furthermore, the concentration of capability in a handful of large AI models, coupled with their heavy hardware requirements, places a disproportionate burden on developers working in constrained environments.
Developer tools: The right fit, not more tools
Contrary to the notion that developers are overwhelmed by tools, our experience suggests the issue is finding the “right” tools for specific problems. For example, Anthropic’s Claude 3.5 Sonnet model has been a game-changer for custom development, providing nuanced, context-aware outputs for complex scenarios.
However, even advanced tools like Claude have limitations when scaled to enterprise-specific use cases.
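As an illustration of the kind of custom-development assistance referred to above, the sketch below sends a context-aware review request to Claude 3.5 Sonnet via the Anthropic Python SDK; the model alias, system prompt, and snippet under review are assumptions for illustration only.

```python
# Minimal sketch: asking Claude 3.5 Sonnet for a context-aware review of an
# automation step via the Anthropic Python SDK. The model alias, prompt, and
# reviewed snippet are illustrative; routing and access would follow whatever
# gateway the institution's security policy allows.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    system=(
        "You are reviewing RPA workflow code for a retail bank. "
        "Flag anything that could send customer data outside the network."
    ),
    messages=[{
        "role": "user",
        "content": (
            "Review this step:\n\n"
            "response = requests.post(EXTERNAL_OCR_URL, files={'doc': open(path, 'rb')})"
        ),
    }],
)
print(message.content[0].text)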
The focus should shift to tools that prioritise modularity and interoperability. Our success with Roboteur stems from enabling developers to build bespoke automation workflows that seamlessly integrate AI, RPA, and traditional IT systems.
By empowering developers to use what they need – and nothing they don’t – we’ve reduced friction and accelerated adoption.
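As a purely hypothetical illustration of that modularity, the sketch below defines a minimal workflow-step interface and composes an AI-backed OCR step with a plain business-rule step; it is not Roboteur’s actual plugin API.

```python
# Hypothetical sketch of a modular workflow-step interface: every step exposes
# the same run() signature, so AI steps, RPA steps, and calls into legacy
# systems can be composed freely. Illustration only, not Roboteur's API.
from typing import Protocol


class WorkflowStep(Protocol):
    def run(self, payload: dict) -> dict: ...


class OcrStep:
    """Placeholder for a local vision-model call (e.g. the Llama sketch above)."""
    def run(self, payload: dict) -> dict:
        payload["fields"] = {"invoice_number": "INV-001", "total_amount": "125.00"}
        return payload


class ValidationStep:
    """Plain business rule; no AI involved."""
    def run(self, payload: dict) -> dict:
        payload["valid"] = float(payload["fields"]["total_amount"]) > 0
        return payload


def run_pipeline(steps: list[WorkflowStep], payload: dict) -> dict:
    for step in steps:
        payload = step.run(payload)
    return payload


print(run_pipeline([OcrStep(), ValidationStep()], {"image_path": "scanned_invoice.png"}))
```

The design choice matters more than the code: because every step shares one small contract, an AI model can be swapped out, or removed entirely, without rewriting the workflow around it.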
Breaking the hardware barrier
The heavy hardware requirements of leading AI models remain a stumbling block for many organisations. This is particularly true in on-premises setups, where enterprises cannot rely on elastic cloud computing resources.
Efforts to optimise AI for commodity hardware or develop scalable hybrid solutions will be pivotal in democratising access to AI’s benefits.
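One concrete optimisation lever is quantisation. The sketch below loads an open-weight model in 4-bit precision so it can run on a single commodity GPU; it assumes the Hugging Face transformers, accelerate, and bitsandbytes libraries, and the model name is illustrative.

```python
# Minimal sketch: loading an open-weight model in 4-bit precision with
# bitsandbytes so it fits on a single commodity GPU rather than a multi-GPU
# server. Model name is illustrative; requires transformers, accelerate,
# bitsandbytes, and a CUDA-capable card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # any open-weight model of similar size

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4 format
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 to limit accuracy loss
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                      # let accelerate place layers on available hardware
)

inputs = tokenizer("Summarise this loan agreement clause:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```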
Looking ahead: Pragmatism over hype
Generative AI is undeniably a powerful tool, but its value lies in pragmatic application rather than chasing hype. For industries like banking, where security and precision are paramount, the path forward involves:
- Enhancing on-site capabilities: Focus on robust, localised models like Llama Vision.
- Simplifying customisation: Leverage platforms like Roboteur that prioritise modularity and integration.
- Democratising hardware: Invest in optimising AI for cost-effective infrastructure.
Enterprises must balance ambition with realism, understanding that success in AI development isn’t about following trends but solving meaningful, industry-specific challenges.