Take advantage of the parallel processing power of FPGAs to implement convolutional neural networks (CNNs). This IP enables you to implement your own custom network or any of the many commonly used networks published by others.
Our IP provides the flexibility to adjust the number of acceleration engines. By adjusting the number of engines and the allocated memory, users can trade off speed of operation against FPGA resource usage to obtain the best match for their application.
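As a rough illustration of this trade-off, the back-of-envelope sketch below shows how throughput and resource usage both scale with the engine count. All numbers in it are hypothetical placeholders for illustration only, not characterized figures for the IP.

```python
# Back-of-envelope sketch of the engine-count trade-off.
# All constants are hypothetical placeholders, not IP specifications;
# they only illustrate that throughput and resource usage scale together.
def estimate(engines, macs_per_engine_per_cycle=8, clock_hz=100e6,
             luts_per_engine=2000, ebr_per_engine=4):
    throughput = engines * macs_per_engine_per_cycle * clock_hz
    return {
        "engines": engines,
        "throughput_GMAC_per_s": throughput / 1e9,
        "LUTs": engines * luts_per_engine,
        "EBR_blocks": engines * ebr_per_engine,
    }

# Compare a few candidate configurations.
for n in (1, 2, 4, 8):
    print(estimate(n))
```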
The CNN Accelerator IP is paired with the Lattice Neural Network Compiler Tool. The compiler takes networks developed in common machine learning frameworks, analyzes them for resource usage, simulates them for performance and functionality, and then compiles them for the CNN Accelerator IP.
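The sketch below shows how a small CNN might be built and saved in a common framework (Keras/TensorFlow) as the starting point of this flow. The Keras calls are standard; the hand-off to the Lattice Neural Network Compiler is described only in the comments, since the tool's actual import interface is not shown here.

```python
# Illustrative sketch: define and save a small CNN in a common ML framework.
# The saved model file would then be imported into the Lattice Neural Network
# Compiler, which analyzes resource usage, simulates performance and
# functionality, and compiles the network for the CNN Accelerator IP.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Save the trained (or untrained, for flow testing) model for the compiler step.
model.save("small_cnn.h5")
```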