The Open Neural Network Exchange (ONNX), launched last month by Microsoft and Facebook to increase interoperability and reduce friction in developing and deploying artificial intelligence (AI), is gaining traction.
AMD, Arm, Huawei, IBM, Intel and Qualcomm have all joined the initiative.
The ONNX format aims to give users more choice among AI frameworks, as every modeling project has its own requirements, which often call for different tools at different stages.
Intel, along with others, is participating in the project to give the developer community greater flexibility: access to the most suitable tools for each AI project, and the ability to switch easily between frameworks and tools.
Intel’s contribution to the open AI ecosystem will broaden the toolset available to developers through neon and the Intel Nervana Graph, as well as deployment through the Intel Deep Learning Deployment Toolkit. neon will be compatible with other deep learning frameworks through the Intel Nervana Graph and ONNX, giving customers more choice of framework along with compatibility with the hardware platform that best fits their needs.
Currently, the ONNX format is supported by Microsoft Cognitive Toolkit, Caffe2 and PyTorch, with capabilities expanding over time. Through the increased interoperability and vast hardware and software ecosystem fostered by ONNX and Intel, developers can construct and train models at an accelerated pace to deliver new AI solutions.
Project Brainwave, Microsoft’s FPGA-based deep learning platform for accelerating real-time AI, will also support ONNX to help customers accelerate models from a variety of frameworks.