Zebra acceleration for Xilinx’s Alveo card

Author: EIS
Release Date: Jul 8, 2020


Mipsology, which specialises in deep learning acceleration software, has had its Zebra neural net accelerating software integrated into the latest build of Xilinx’s Alveo U50 data centre accelerator card.

The software enables the Alveo U50 to run convolutional neural network inference with zero effort, requiring no code changes from the developer.

This is the latest in a series of Zebra-enhanced Xilinx boards that enable inference acceleration for a wide variety of sophisticated AI applications. Others include the Alveo U200 and Alveo U250 boards.

“The level of acceleration that Zebra brings to our Alveo cards puts CPU and GPU accelerators to shame,” says Xilinx VP Ramine Roane. “Combined with Zebra, Alveo U50 meets the flexibility and performance needs of AI workloads and offers high-throughput and low-latency performance advantages to any deployment.”

Zebra’s Zero Effort IP creates the first plug-and-play FPGA solution of its kind, delivering broad application flexibility, longer hardware life and lower power and cost. It leverages developers’ existing skill sets and eliminates the need for FPGA expertise, making the Alveo U50 as easy to use for deep learning inference acceleration as a CPU or GPU.
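To illustrate the "plug-and-play, no FPGA expertise required" idea, here is a minimal sketch of a device-agnostic inference dispatch: the model code stays identical and only the execution backend name changes. All names here (`register_backend`, `run_inference`, the `"zebra-fpga"` backend) are hypothetical illustrations, not Mipsology's actual API.

```python
from typing import Callable, Dict, List

# Hypothetical backend registry: each backend implements the same operation,
# so swapping devices requires no change to the model code itself.
BACKENDS: Dict[str, Callable[[List[float], List[float]], List[float]]] = {}

def register_backend(name: str):
    def decorator(fn):
        BACKENDS[name] = fn
        return fn
    return decorator

@register_backend("cpu")
def conv1d_cpu(signal: List[float], kernel: List[float]) -> List[float]:
    # Naive "valid" 1-D convolution executed on the host CPU.
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

@register_backend("zebra-fpga")
def conv1d_fpga(signal: List[float], kernel: List[float]) -> List[float]:
    # Stand-in for an FPGA-offloaded kernel: in a real deployment the
    # accelerator would return the same result, just faster.
    return conv1d_cpu(signal, kernel)

def run_inference(signal: List[float], kernel: List[float],
                  device: str = "cpu") -> List[float]:
    # The calling code never changes; only the `device` string does.
    return BACKENDS[device](signal, kernel)

signal = [1.0, 2.0, 3.0, 4.0]
kernel = [1.0, 0.0, -1.0]
print(run_inference(signal, kernel, device="cpu"))         # [-2.0, -2.0]
print(run_inference(signal, kernel, device="zebra-fpga"))  # [-2.0, -2.0]
```

The point of the pattern is that both calls produce identical results, which is what lets an accelerator be dropped in behind existing CPU or GPU workflows without retraining or rewriting the network.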

“Zebra delivers the highest possible performance and ease of use for inference acceleration,” says Mipsology founder and CEO Ludo Larzul. “With the Alveo U50, Xilinx and Mipsology are providing AI application developers with a card that excels across multiple apps and in every development environment.”

Thanks to their high performance and long life expectancy, Zebra-powered FPGAs are claimed to be better suited than GPUs for accelerating neural network inference in both the data centre and large industrial AI applications, including robotics, smart cities, image processing and video analytics, healthcare, retail, driver-assist cars and video surveillance.

Mipsology also claims they extend the lifetime of neural network solutions by doubling FPGA performance every year on the same silicon, across FPGA generations.