AMD has announced the Alveo UL3524, a new fintech accelerator card designed for ultra-low latency electronic trading applications.

Already deployed by leading trading firms and enabling multiple solution partner offerings, the Alveo UL3524 provides proprietary traders, market makers, hedge funds, brokerages, and exchanges with a state-of-the-art FPGA platform for electronic trading at nanosecond (ns) speed.

The Alveo UL3524 delivers a 7X latency improvement over prior-generation FPGA technology [1], achieving less than 3 ns of FPGA transceiver latency [2] for accelerated trade execution.

Powered by a custom 16nm Virtex UltraScale+ FPGA, it features a novel transceiver architecture with hardened, optimised network connectivity cores to achieve its performance. By combining hardware flexibility with ultra-low latency networking on a production platform, the Alveo UL3524 enables faster design closure and deployment compared to traditional FPGA alternatives.

“In ultra-low latency trading, a nanosecond can determine the difference between a profitable or losing trade,” says Hamid Salehi, director of product marketing at AMD. “The Alveo UL3524 accelerator card is powered by the lowest latency FPGA transceiver from AMD, purpose-built to give our fintech customers an unprecedented competitive advantage in financial markets.”

Featuring 64 ultra-low latency transceivers, 780K LUTs of FPGA fabric, and 1,680 DSP slices of compute, the Alveo UL3524 is built to accelerate custom trading algorithms in hardware, allowing traders to tailor their designs to evolving strategies and market conditions.

Supported by traditional FPGA design flows using the Vivado Design Suite, the product comes with a suite of reference designs and performance benchmarks that allow FPGA designers to quickly explore key metrics and develop custom trading designs to their specifications, backed by global support from AMD domain experts.

To simplify the growing adoption of AI in the algorithmic trading market, AMD is providing developers with the open-source, community-supported FINN development framework. Using PyTorch and neural network quantisation techniques, FINN enables developers to reduce the size of AI models while retaining accuracy, compile them to hardware IP, and integrate the network model into the algorithm’s datapath for low-latency performance. As an open-source initiative, it gives developers flexibility and access to the latest advancements as the project evolves.
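To make that flow a little more concrete, the sketch below shows roughly how a developer might describe a quantised network in PyTorch using Brevitas, the quantisation library commonly paired with the FINN compiler. The network shape, bit widths, and export path are illustrative assumptions rather than an AMD reference design, and the export call reflects recent Brevitas releases rather than any Alveo-specific tooling.

# Minimal sketch: quantisation-aware model definition with Brevitas (PyTorch).
# Layer sizes, bit widths and file names are illustrative placeholders.
import torch
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU
from brevitas.export import export_qonnx

class TinySignalNet(nn.Module):
    """A small fully connected network with low-bit weights and activations."""
    def __init__(self, n_features=16, n_hidden=32, n_classes=2):
        super().__init__()
        self.quant_in = QuantIdentity(bit_width=8)   # quantise the incoming features
        self.fc1 = QuantLinear(n_features, n_hidden, bias=True, weight_bit_width=4)
        self.act1 = QuantReLU(bit_width=4)
        self.fc2 = QuantLinear(n_hidden, n_classes, bias=True, weight_bit_width=4)

    def forward(self, x):
        return self.fc2(self.act1(self.fc1(self.quant_in(x))))

model = TinySignalNet()
# ... train with an ordinary PyTorch loop; Brevitas layers learn quantised weights ...

# Export a quantised ONNX graph that the FINN compiler can take forward
# into a streaming hardware IP block for the FPGA datapath.
export_qonnx(model, torch.randn(1, 16), export_path="tiny_signal_net.onnx")

The point of the quantisation step is to shrink the model enough that it fits comfortably in FPGA fabric alongside the trading logic, so the compiled IP can sit directly in the algorithm’s datapath rather than behind a separate inference call.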

The Alveo UL3524 and its purpose-built FPGA technology are enabling strategic partners to build custom solutions and infrastructure for the fintech market. Currently available partner solutions include offerings from Alpha Data, Exegy, and Hypertec.