Reconfigurable acceleration stack boosts cloud's compute efficiency

November 16, 2016 // By Julien Happich
Xilinx unveiled a new suite of technology designed to enable the world’s largest cloud service providers to rapidly develop and deploy acceleration platforms.

Designed for cloud-scale applications, the FPGA-powered Xilinx Reconfigurable Acceleration Stack includes libraries, framework integrations, developer boards, and OpenStack support. Xilinx claims the stack offers the fastest path to 40x better compute efficiency than x86 server CPUs, and up to 6x the compute efficiency of competing FPGAs. Using dynamic reconfiguration, the stack enables silicon optimization for a broad set of performance-demanding workloads, including machine learning, data analytics, and video transcoding. These workload optimizations can be applied in milliseconds by swapping in the most suitable design bitstream.
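The workload-driven bitstream swap described above can be sketched as a simple dispatcher. This is a minimal illustration only: the workload names and bitstream files are hypothetical, and actually programming the device would go through Xilinx's runtime tooling rather than this placeholder step.

```python
# Hypothetical sketch: map each workload class to a precompiled FPGA
# design bitstream and "reconfigure" by selecting the matching file.
# Bitstream names are illustrative; real device programming uses
# vendor runtime tools, not this function.

BITSTREAMS = {
    "machine_learning": "ml_inference.bit",
    "data_analytics": "sql_offload.bit",
    "video_transcoding": "hevc_encode.bit",
}

def select_bitstream(workload: str) -> str:
    """Return the bitstream optimized for the given workload class."""
    try:
        return BITSTREAMS[workload]
    except KeyError:
        raise ValueError(f"no accelerator bitstream for workload: {workload}")

def reconfigure(workload: str) -> str:
    # In a real deployment this is where the bitstream would be handed
    # to the FPGA's configuration interface; here we just report the swap.
    bitstream = select_bitstream(workload)
    return f"loaded {bitstream}"
```

Because only the bitstream changes, the same physical accelerator card can be retargeted from, say, video transcoding to machine-learning inference without any hardware change.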

Xilinx says its FPGAs enable hyperscale data centers to achieve 2-6x the compute efficiency in machine learning inference because of DSP architectural advantages for limited-precision data types, superior on-chip memory resources, and a technology lead of more than a year over competing FPGAs.

The Xilinx Reconfigurable Acceleration Stack includes math libraries designed for cloud computing workloads; application libraries integrated with major frameworks, such as Caffe for machine learning; a PCIe-based development board and reference design for high-density servers; and an OpenStack support package that makes Xilinx FPGA-based accelerators easy to provision and manage. Visit Xilinx at www.xilinx.com