IBM is contributing three open-source projects – Docling, Data Prep Kit and BeeAI – to the Linux Foundation.

“We’re continuing our long history of contributing open-source projects to ensure that they’re easy to consume and that it’s easy for others – not just us – to contribute,” says Brad Topol, IBM distinguished engineer and director of open technologies. Topol also chairs the Governing Board of the LF AI & Data Foundation, a group hosted under the Linux Foundation focused on advancing open-source innovation across artificial intelligence (AI) and data technologies.

Each project is focused on an essential part of the AI development stack. As the industry matures, innovation driven by the broader developer community in these areas is key to making AI enterprise-ready.

Docling, which IBM launched as an open-source project a year ago, addresses a limitation that many foundation models have for enterprise use. While the models have been trained on every scrap of publicly available information, much of the data valuable to businesses lies in documents that are not accessible online: PDFs, annual reports, slide decks.

Docling streamlines the process of turning unstructured documents into JSON and Markdown files that are easy for large language models (LLMs) and other foundation models to digest.
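To make the idea concrete, here is a minimal stdlib-only sketch of the kind of conversion Docling performs. This is a hypothetical illustration, not the Docling API: parsed document elements (headings, paragraphs, tables) are rendered as Markdown or serialized as JSON for an LLM to consume.

```python
import json

# Hypothetical illustration (not the Docling API): a parsed document
# becomes a list of typed elements that can be exported as Markdown
# or JSON for a model to digest.
elements = [
    {"type": "heading", "level": 1, "text": "Annual Report 2024"},
    {"type": "paragraph", "text": "Revenue grew 8% year over year."},
    {"type": "table", "rows": [["Quarter", "Revenue"], ["Q1", "$2.1B"]]},
]

def to_markdown(elements):
    """Render parsed elements as Markdown an LLM can ingest directly."""
    parts = []
    for el in elements:
        if el["type"] == "heading":
            parts.append("#" * el["level"] + " " + el["text"])
        elif el["type"] == "paragraph":
            parts.append(el["text"])
        elif el["type"] == "table":
            header, *body = el["rows"]
            lines = ["| " + " | ".join(header) + " |",
                     "|" + "---|" * len(header)]
            lines += ["| " + " | ".join(r) + " |" for r in body]
            parts.append("\n".join(lines))
    return "\n\n".join(parts)

markdown = to_markdown(elements)
as_json = json.dumps(elements, indent=2)  # same structure, JSON form
print(markdown.splitlines()[0])  # → # Annual Report 2024
```

The point of the structured output is that tables and headings survive the conversion, rather than collapsing into a wall of text the model cannot reason over.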

Since its release, Docling has gained traction, earning more than 23,000 stars on GitHub. When combined with retrieval-augmented generation (RAG) techniques, Docling improves LLM outputs.

“Docling can make the LLMs answer much better and much more specific to their needs,” says Topol.

In addition to gaining traction in the open-source community, Docling helps power Red Hat Enterprise Linux AI, where it enables context-aware chunking and supports the platform’s new data ingestion pipeline.
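The idea behind context-aware chunking can be sketched in a few lines. This is an illustrative simplification, not the Docling implementation: a Markdown document is split on headings, and each chunk carries its heading path, so a chunk retrieved later by a RAG system still knows where in the document it came from.

```python
# Illustrative sketch of context-aware chunking (not Docling's code):
# split Markdown on headings and prefix every chunk with its heading
# path, so each retrieved chunk carries its context.
def chunk_markdown(text):
    chunks, path, body = [], [], []

    def flush():
        if body:
            context = " > ".join(path) or "(document)"
            chunks.append({"context": context,
                           "text": "\n".join(body).strip()})
            body.clear()

    for line in text.splitlines():
        if line.startswith("#"):
            flush()
            level = len(line) - len(line.lstrip("#"))
            del path[level - 1:]          # pop deeper or equal headings
            path.append(line.lstrip("# ").strip())
        else:
            body.append(line)
    flush()
    return chunks

doc = "# Report\n## Revenue\nGrew 8%.\n## Risks\nSupply chain."
for c in chunk_markdown(doc):
    print(c["context"], "->", c["text"])
```

Without the heading path, a retrieved sentence like "Grew 8%." is ambiguous; with it, the model sees that the figure belongs under "Report > Revenue".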

Another critical step in deploying AI is data preparation. IBM’s Data Prep Kit, which was released in 2024, has also gained popularity: it helps clean, transform and enrich unstructured data for pre-training, fine-tuning and RAG use cases.

Unstructured data – such as databases, web pages and audio files, which are more complex to parse and extract insights from – accounts for 90% of all enterprise-generated data, according to IDC. LLMs can analyze vast amounts of unstructured data and extract relevant insights to generate and test new product or service ideas, for instance, in hours rather than months.

Data Prep Kit is designed to simplify data prep for LLM applications, currently focused on code and language models. Built on familiar distributed processing frameworks like Spark and Ray, it gives developers the flexibility to create custom modules that scale easily, whether running on a laptop or across an entire data center.
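The transform-module pattern described above can be sketched with plain Python. This is a hypothetical illustration, not the Data Prep Kit API: each transform maps a batch of records to a batch of records, which is exactly the shape that distributes cleanly over Spark or Ray map operations.

```python
import hashlib

# Hypothetical sketch of the transform-module pattern (not the actual
# Data Prep Kit API): each transform is batch-in, batch-out, so the
# same logic could be distributed with Spark or Ray.
def clean(records):
    """Normalize whitespace in each document."""
    return [{**r, "text": " ".join(r["text"].split())} for r in records]

def exact_dedup(records):
    """Drop documents whose content hash has already been seen."""
    seen, out = set(), []
    for r in records:
        h = hashlib.sha256(r["text"].encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(r)
    return out

def min_length(n):
    """Filter out documents shorter than n characters."""
    return lambda records: [r for r in records if len(r["text"]) >= n]

def run_pipeline(records, transforms):
    for t in transforms:
        records = t(records)
    return records

docs = [
    {"id": 1, "text": "  Quarterly   results were strong.  "},
    {"id": 2, "text": "Quarterly results were strong."},  # dupe once cleaned
    {"id": 3, "text": "ok"},                              # too short
]
out = run_pipeline(docs, [clean, exact_dedup, min_length(10)])
print([r["id"] for r in out])  # → [1]
```

Ordering matters here: running `clean` before `exact_dedup` lets the pipeline catch duplicates that differ only in whitespace, which is the kind of unglamorous but critical data hygiene the project targets.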

“We used to say, garbage in, garbage out. You definitely want good data going in,” Topol says. “This is not a glamorous project compared to some of the other parts of the LLM life cycle, but it’s incredibly critical, incredibly valuable and a definite must-have.”

Data Prep Kit is beginning to power IBM offerings and is now part of the tech preview of IBM Data Integration for Unstructured Data.

Finally, as AI agents gain traction, IBM released BeeAI, which developers can use to discover, run and compose AI agents from any framework, including CrewAI, LangGraph, and AutoGen.

The project includes the Agent Communication Protocol, which powers agent discoverability and interoperability, and the BeeAI framework, its native framework for building agents in Python or TypeScript, optimized for open-source models.
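The discover-and-compose idea can be illustrated with a small sketch. This is hypothetical code, not the Agent Communication Protocol or the beeai-framework API: agents from different frameworks are wrapped behind one common `run(input) -> output` interface, registered so they can be discovered, and chained into a pipeline.

```python
# Hypothetical sketch (not ACP or the beeai-framework API): agents from
# any framework are wrapped behind one run(input) -> output interface,
# registered for discovery, and composed into a pipeline.
class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, name, run_fn):
        self._agents[name] = run_fn

    def discover(self):
        return sorted(self._agents)

    def compose(self, *names):
        """Chain registered agents: each one's output feeds the next."""
        def pipeline(text):
            for name in names:
                text = self._agents[name](text)
            return text
        return pipeline

registry = AgentRegistry()
# Imagine these lambdas wrap agents built with CrewAI, LangGraph or
# AutoGen — the registry only needs the common interface.
registry.register("summarize", lambda text: text.split(".")[0] + ".")
registry.register("uppercase", lambda text: text.upper())

agent = registry.compose("summarize", "uppercase")
print(registry.discover())                  # → ['summarize', 'uppercase']
print(agent("Revenue grew. Costs fell."))   # → REVENUE GREW.
```

The design point is that interoperability lives in the shared interface, not in any single framework, which is what lets BeeAI plug in agents it did not build.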

“There are other frameworks for building agents,” says Topol. “But what’s nice about BeeAI is that it provides a platform where you can also plug in agents from those other technologies. BeeAI doesn’t just work with its own agents.”

By contributing these projects to the Linux Foundation, IBM aims to expand their reach and attract new contributors and users. “The projects are in a wonderful spot where people can invest their resources. It makes a huge difference,” says Topol. “It’s like an insurance policy. The open governance also makes people feel better that if they contribute, over time, they’re going to earn their stripes through what we call meritocracy and earn a more influential role in the project. They can also feel secure that the project won’t make any drastic open-source license changes that could dramatically impede future use of the project.”

Article by Anabelle Nicoud, tech reporter at IBM