CNN Plus Accelerator IP

AI Acceleration Using Low Power FPGAs

Customized convolutional neural network (CNN) IP – The CNN Plus IP is a flexible accelerator IP that simplifies implementation of ultra-low-power AI by leveraging the parallel processing capabilities, distributed memory, and DSP resources of Lattice FPGAs.

Configurable modes of use – Two implementations are available: compact and high performance. Compact mode is a low-power processing mode that takes advantage of the FPGA's local memory, while high-performance mode is optimized for larger network implementations.

Easy to implement – Models trained using common machine learning frameworks such as TensorFlow can be compiled with the Lattice Neural Network Compiler tool and implemented in hardware using the CNN Plus Accelerator IP.
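As a rough sketch of the front half of that flow, the snippet below builds and saves a tiny CNN in TensorFlow/Keras. This is illustrative only: the model architecture, file name, and training setup are assumptions, and the exact input formats the Lattice Neural Network Compiler accepts are defined in its own documentation.

```python
# Illustrative sketch (assumption: a trained TensorFlow/Keras model is the
# starting point; the Lattice Neural Network Compiler documentation defines
# the exact supported input formats). Builds and saves a tiny CNN of the
# kind the compiler tool would convert into a command sequence for the
# CNN Plus Accelerator IP.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),          # e.g. 32x32 grayscale input
    tf.keras.layers.Conv2D(8, 3, activation="relu"),   # small convolution layer
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# After training (model.fit ...), save the model; this saved artifact is
# what would then be fed into the compiler flow.
model.save("tiny_cnn.keras")
```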


  • Performs a series of calculations per the command sequence generated by the Lattice NN Compiler tool
  • Configurable resource usage for trading off power against performance
  • Supports common network structures such as VGG, MobileNet, ResNet, and SSD
  • Takes advantage of internal and external memory resources and manages access to optimize performance
  • Configurable bit width of neural network weights (16-bit, 8-bit, 1-bit)
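To make the weight bit-width tradeoff concrete, the back-of-the-envelope sketch below computes raw weight storage at each supported width. The one-million-weight network size is an assumption for illustration, and the figures ignore packing overhead, activations, and any accuracy impact of quantization.

```python
# Back-of-the-envelope sketch (assumption: raw weight storage only, for a
# hypothetical one-million-weight network) of how the configurable weight
# bit width (16-bit, 8-bit, 1-bit) trades memory for precision.
def weight_storage_bytes(num_weights: int, bits: int) -> int:
    """Bytes needed to store num_weights values at the given bit width."""
    return (num_weights * bits + 7) // 8  # round up to whole bytes

n = 1_000_000  # assumed network size, for illustration
for bits in (16, 8, 1):
    print(f"{bits:2d}-bit weights: {weight_storage_bytes(n, bits):,} bytes")
# 16-bit -> 2,000,000 bytes; 8-bit -> 1,000,000 bytes; 1-bit -> 125,000 bytes
```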


Block Diagram

CNN Plus IP Compact Mode Block Diagram

CNN Plus IP High Performance Mode Block Diagram

Performance and Size

CrossLink-NX Performance and Resource Utilization
| Configuration³ | clk_i, aclk_i Fmax (MHz)² | Slice Registers | LUTs | LRAMs | EBRs⁴ | Logical DSP |
|---|---|---|---|---|---|---|
| Default | 116.401, 118.652 | 2855 | 3673 | 2 | 12 | 13, 1, 13, 13 |
| Scratch Pad Memory Size=4K, Others=Default | 119.962, 118.259 | 2890 | 3689 | 2 | 15 | 13, 1, 13, 13 |
| Scratch Pad Memory Size=8K, Others=Default | 121.832, 116.009 | 2898 | 3685 | 2 | 19 | 13, 1, 13, 13 |
| Scratch Pad Memory Size=16K, Others=Default | 118.751, 113.598 | 2880 | 3703 | 2 | 27 | 13, 1, 13, 13 |
| Memory Type=SINGLE_LRAM, Others=Default | 115.062, 113.404 | 2869 | 3631 | 1 | 12 | 13, 1, 13, 13 |
| Machine Learning Type=OPTIMIZED_CNN | 123.609, 113.662 | 5687 | 7693 | 2 | 17 | 48, 4, 48, 48 |
| Machine Learning Type=OPTIMIZED_CNN, Scratch Pad Memory Size=2K, Others=Default | 117.564, 109.158 | 5695 | 7717 | 2 | 21 | 48, 4, 48, 48 |
| Machine Learning Type=OPTIMIZED_CNN, Scratch Pad Memory Size=4K, Others=Default | 124.239, 118.092 | 5709 | 7711 | 2 | 29 | 48, 4, 48, 48 |
| Machine Learning Type=OPTIMIZED_CNN, Scratch Pad Memory Size=8K, Others=Default | 120.963, 112.133 | 5707 | 7706 | 2 | 45 | 48, 4, 48, 48 |
| Machine Learning Type=OPTIMIZED_CNN, Scratch Pad Memory Size=8K, Maximum Burst Length=256, Others=Default | 123.289, 120.875 | 5709 | 7722 | 2 | 45 | 48, 4, 48, 48 |

1. Performance may vary when using a different software version or targeting a different device density or speed grade.
2. Fmax is generated when the FPGA design only contains the CNN Plus Accelerator IP Core. These values may be reduced when user logic is added to the FPGA design.
3. The K value in “Scratch Pad Memory Size=*K” denotes 1024 entries of 2 bytes each. For example, 4K is equal to 8 kB of scratch pad memory.
4. The OPTIMIZED_CNN implementation uses significantly more EBRs because it duplicates the EBRs in the convolution scratch storage to enable parallel processing. In addition, some duplicated submodules have their own EBRs: CONV_EU (1 EBR per unit) and POOL (1 EBR shared by 2 units).
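The arithmetic behind footnote 3 can be sketched in a couple of lines; the helper name below is hypothetical, but the math follows the footnote directly (each "K" is 1024 entries of 2 bytes, so the setting in K doubles to kilobytes).

```python
# Sketch of footnote 3's arithmetic: each "K" of scratch pad memory is
# 1024 entries x 2 bytes, so a Scratch Pad Memory Size=<k>K setting
# yields 2*k kB of scratch pad memory. Helper name is illustrative.
def scratchpad_bytes(size_k: int) -> int:
    """Bytes of scratch pad memory for a 'Scratch Pad Memory Size=<size_k>K' setting."""
    return size_k * 1024 * 2  # entries * bytes per entry

for k in (2, 4, 8, 16):
    print(f"{k}K -> {scratchpad_bytes(k):,} bytes ({scratchpad_bytes(k) // 1024} kB)")
# e.g. 4K -> 8,192 bytes (8 kB), matching the footnote's example
```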

Ordering Information

| Family | Part Number | Description |
|---|---|---|
| Certus-NX | CNNPLUS-ACCEL-CTNX-U | Single Design License |
| Certus-NX | CNNPLUS-ACCEL-CTNX-UT | Multi-Site License |
| CrossLink-NX | CNNPLUS-ACCEL-CNX-U | Single Design License |
| CrossLink-NX | CNNPLUS-ACCEL-CNX-UT | Multi-Site License |


Quick Reference
CNN Plus Accelerator IP User Guide
FPGA-IPUG-02115 1.2 5/27/2021 PDF 1.1 MB

