Gartner predicts that, by 2027, organisations will implement small, task-specific AI models, with usage volume at least three times that of general-purpose large language models (LLMs).
While general-purpose LLMs provide robust language capabilities, their response accuracy declines for tasks requiring specific business domain context.
“The variety of tasks in business workflows and the need for greater accuracy are driving the shift towards specialized models fine-tuned on specific functions or domain data,” says Sumit Agarwal, vice-president analyst at Gartner. “These smaller, task-specific models provide quicker responses and use less computational power, reducing operational and maintenance costs.”
Enterprises can customise LLMs for specific tasks by employing retrieval-augmented generation (RAG) or fine-tuning to create specialised models. In this process, enterprise data becomes a key differentiator: it must be prepared, quality-checked, versioned and managed overall so that relevant data is structured to meet fine-tuning requirements.
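The RAG pattern mentioned above can be sketched in a few lines. This is an illustrative toy only: the keyword-overlap scorer, the document list and the `build_prompt` helper are all assumptions standing in for a real embedding model, vector store and LLM call.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens; a stand-in for embedding-based similarity."""
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> int:
    """Naive relevance score: count of shared word tokens between query and document."""
    return len(_tokens(query) & _tokens(doc))

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved enterprise context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical enterprise documents for illustration.
docs = [
    "Refund policy: refunds are issued within 14 days of purchase.",
    "Shipping policy: orders ship within 2 business days.",
]
prompt = build_prompt("How many days for a refund?", docs)
```

In production this retrieval step would typically use vector embeddings over curated, versioned enterprise data, which is why the data-management work described above matters.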
“As enterprises increasingly recognize the value of their private data and insights derived from their specialized processes, they are likely to begin monetising their models and offering access to these resources to a broader audience, including their customers and even competitors,” says Agarwal. “This marks a shift from a protective approach to a more open and collaborative use of data and knowledge.”
By commercialising their proprietary models, enterprises can create new revenue streams while simultaneously fostering a more interconnected ecosystem.
Enterprises looking to implement small, task-specific AI models must consider the following recommendations:
- Pilot Contextualised Models: Implement small, contextualised models in areas where business context is crucial or where LLMs have not met response quality or speed expectations.
- Adopt Composite Approaches: Identify use cases where a single model falls short and instead employ a composite approach involving multiple models and workflow steps.
- Strengthen Data and Skills: Prioritise data preparation efforts to collect, curate and organise the data necessary for fine-tuning language models. Simultaneously, invest in upskilling personnel across technical and functional groups, including AI and data architects, data scientists, AI and data engineers, risk and compliance teams, procurement teams and business subject-matter experts, to drive these initiatives effectively.
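The composite approach in the second recommendation can be sketched as a simple router: try the cheap, specialised model first and escalate to a general-purpose LLM only when confidence is low. Everything here is a hypothetical stand-in; the model functions and the confidence threshold are assumptions, not a real API.

```python
def small_invoice_model(query: str) -> tuple[str, float]:
    """Stand-in for a small model fine-tuned on invoice data.
    Returns (answer, confidence); out-of-domain queries get low confidence."""
    if "invoice" in query.lower():
        return ("Invoices are due net-30.", 0.9)
    return ("", 0.1)

def general_llm(query: str) -> str:
    """Stand-in for a general-purpose LLM used as a fallback."""
    return f"[general model answer to: {query}]"

def route(query: str, threshold: float = 0.5) -> str:
    """Composite workflow: use the specialised model when it is confident,
    otherwise escalate to the slower, costlier general model."""
    answer, confidence = small_invoice_model(query)
    return answer if confidence >= threshold else general_llm(query)
```

Routing this way keeps most traffic on the faster, cheaper task-specific model, which is the cost and latency advantage the article describes.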