Researchers from Intel Labs and the Weizmann Institute of Science have introduced a major advance in speculative decoding.
The new technique, presented at the International Conference on Machine Learning (ICML) in Vancouver, enables any small “draft” model to accelerate any large language model (LLM), regardless of vocabulary differences.
“We have solved a core inefficiency in generative AI. Our research shows how to turn speculative acceleration into a universal tool,” says Oren Pereg, senior researcher in the natural language processing group at Intel Labs. “This isn’t just a theoretical improvement; these are practical tools that are already helping developers build faster and smarter applications today.”
Speculative decoding is an inference optimization technique designed to make LLMs faster and more efficient without compromising accuracy. It works by pairing a small, fast model with a larger, more accurate one, creating a “team effort” between models.
How speculative decoding works
Consider the prompt for an AI model: “What is the capital of France…”
A traditional LLM generates each word step by step. It fully computes “Paris,” then “a,” then “famous,” then “city,” and so on, consuming significant resources at each step.
With speculative decoding, the small assistant model quickly drafts the full phrase “Paris, a famous city…” The large model then verifies the sequence.
This dramatically reduces the compute cycles per output token.
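To make the draft-and-verify loop concrete, the sketch below shows a bare-bones greedy variant of speculative decoding: a small model proposes a few tokens cheaply, and the large model checks the whole proposal in a single forward pass. The model names, block size, and loop structure are illustrative assumptions, not the researchers’ implementation.

```python
# A minimal sketch of greedy speculative decoding, assuming a draft and a
# target model that share a tokenizer. Model names are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")           # shared vocabulary
target = AutoModelForCausalLM.from_pretrained("gpt2")       # large, accurate model
draft = AutoModelForCausalLM.from_pretrained("distilgpt2")  # small, fast model

prompt = "What is the capital of France?"
ids = tokenizer(prompt, return_tensors="pt").input_ids
k = 4  # number of tokens the draft model proposes per round

for _ in range(8):
    # 1. Draft: the small model proposes k tokens, one at a time (cheap).
    proposal = draft.generate(ids, max_new_tokens=k, do_sample=False,
                              pad_token_id=tokenizer.eos_token_id)

    # 2. Verify: the large model scores the entire proposal in ONE forward pass.
    with torch.no_grad():
        verified = target(proposal).logits.argmax(dim=-1)

    # 3. Accept drafted tokens while they match what the target would have
    #    produced greedily; stop at the first mismatch.
    n_prompt = ids.shape[1]
    accepted = 0
    for i in range(n_prompt, proposal.shape[1]):
        if proposal[0, i] == verified[0, i - 1]:
            accepted += 1
        else:
            break

    # Keep the accepted tokens, plus the target's own token at the point where
    # the draft diverged (or a bonus token if everything was accepted).
    ids = torch.cat(
        [proposal[:, : n_prompt + accepted],
         verified[:, n_prompt + accepted - 1 : n_prompt + accepted]],
        dim=-1,
    )

print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

In the best case every drafted token is accepted, so the large model spends one forward pass per block of tokens instead of one per token.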
This universal method by Intel and the Weizmann Institute removes the limitations of shared vocabularies or co-trained model families, making speculative decoding practical across heterogeneous models. It delivers performance gains of as much as 2.8 times faster inference without loss of output quality.
It also works across models from different developers and ecosystems, making it vendor-agnostic, and it is open-source ready through its integration with the Hugging Face Transformers library.
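To see what decoupling vocabularies means in practice, the sketch below shows one simple way two unrelated tokenizers can be bridged: the draft model’s proposal is decoded back to plain text and re-encoded with the target model’s tokenizer. This round-trip is only an illustrative stand-in for the paper’s algorithms, and the tokenizer names are arbitrary examples.

```python
# Illustrative sketch: bridging two unrelated vocabularies by round-tripping
# through text. This is a simplified stand-in, not the paper's exact method.
from transformers import AutoTokenizer

draft_tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")  # draft model's tokenizer
target_tok = AutoTokenizer.from_pretrained("gpt2")                  # target model's tokenizer

proposal_text = "Paris, a famous city"  # text drafted by the small model
draft_ids = draft_tok.encode(proposal_text, add_special_tokens=False)

# The target model cannot verify these IDs directly: they index a different
# vocabulary. Decode back to text and re-encode with the target tokenizer so
# the large model can score the same proposal in its own token space.
roundtrip_text = draft_tok.decode(draft_ids, skip_special_tokens=True)
target_ids = target_tok.encode(roundtrip_text, add_special_tokens=False)

print(draft_ids)   # token IDs in the draft vocabulary
print(target_ids)  # the same text, expressed in the target vocabulary
```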
In a fragmented AI landscape, this speculative decoding breakthrough promotes openness, interoperability and cost-effective deployment from cloud to edge. Developers, enterprises and researchers can now mix and match models to suit their performance needs and hardware constraints.
“This work removes a major technical barrier to making generative AI faster and cheaper,” says Nadav Timor, PhD student in the research group of Professor David Harel at the Weizmann Institute. “Our algorithms unlock state-of-the-art speedups that were previously available only to organizations that train their own small draft models.”
The research paper introduces three new algorithms that decouple speculative decoding from vocabulary alignment. This opens the door to flexible LLM deployment, with developers pairing any small draft model with any large model to optimise inference speed and cost across platforms.
Importantly, the research isn’t just theoretical. The algorithms are already integrated into the Hugging Face Transformers open source library, making advanced LLM acceleration available out of the box, with no need for custom code.
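For developers, the sketch below shows roughly how cross-vocabulary assisted generation is invoked through the Transformers generate() API: the draft model is passed as an assistant alongside both tokenizers. The model names are illustrative, and the exact keyword arguments and minimum library version should be checked against the current Transformers documentation.

```python
# A minimal usage sketch of cross-vocabulary assisted generation with the
# Hugging Face Transformers library. Model choices are illustrative; verify
# the keyword arguments against the documentation for your installed version.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-3.1-8B-Instruct"   # large target model (example)
draft_name = "Qwen/Qwen2.5-0.5B-Instruct"          # small draft model from a
                                                   # different family (example)

target_tokenizer = AutoTokenizer.from_pretrained(target_name)
assistant_tokenizer = AutoTokenizer.from_pretrained(draft_name)
target = AutoModelForCausalLM.from_pretrained(target_name, device_map="auto")
assistant = AutoModelForCausalLM.from_pretrained(draft_name, device_map="auto")

prompt = "What is the capital of France?"
inputs = target_tokenizer(prompt, return_tensors="pt").to(target.device)

# Passing both tokenizers alongside the assistant model lets generate() bridge
# the two vocabularies during drafting and verification.
outputs = target.generate(
    **inputs,
    assistant_model=assistant,
    tokenizer=target_tokenizer,
    assistant_tokenizer=assistant_tokenizer,
    max_new_tokens=64,
)
print(target_tokenizer.decode(outputs[0], skip_special_tokens=True))
```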