The Lattice Semiconductor CNN Plus Accelerator IP Core is a calculation engine for deep neural networks with fixed-point weights. It calculates full neural network layers, including convolution, pooling, batch normalization, and fully connected layers, by executing sequence code and weight values generated by the Lattice sensAI™ Neural Network Compiler. The engine is optimized for convolutional neural networks, making it well suited to vision-based applications such as classification, object detection, and tracking. The IP Core does not require an external processor; it performs all required calculations by itself.
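To illustrate what "fixed-point weights" means in practice, the sketch below converts floating-point weights to signed fixed-point integers. This is a generic illustration, not the actual format or rounding scheme used by the sensAI Neural Network Compiler; the bit widths and scaling here are assumed for demonstration.

```python
def quantize_fixed_point(weights, frac_bits=12, total_bits=16):
    """Convert floating-point weights to signed fixed-point integers.

    Illustrative only: the bit widths, rounding mode, and per-layer
    scaling actually produced by the Lattice sensAI compiler may differ.
    """
    scale = 1 << frac_bits                   # value of one integer LSB
    lo = -(1 << (total_bits - 1))            # most negative representable code
    hi = (1 << (total_bits - 1)) - 1         # most positive representable code
    # Scale, round to nearest integer, and saturate to the representable range.
    return [max(lo, min(hi, round(w * scale))) for w in weights]

def dequantize(q_weights, frac_bits=12):
    """Recover approximate floating-point values from fixed-point codes."""
    scale = 1 << frac_bits
    return [q / scale for q in q_weights]

weights = [0.5, -0.25, 1.001, -2.0]
q = quantize_fixed_point(weights)            # e.g. 0.5 becomes 0.5 * 4096 = 2048
approx = dequantize(q)                       # close to the original weights
```

Fixed-point arithmetic lets the engine use integer multipliers and DSP blocks instead of floating-point hardware, which is what keeps the FPGA footprint and power low.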
The CNN Plus Accelerator IP Core offers three implementation types. The compact CNN type has low resource utilization, making it suitable for small FPGA devices. The optimized CNN type performs four convolution calculations in parallel, making it suitable for high-speed applications. The extended CNN type offers the same features as the optimized type, with additional support for max pooling/unpooling with max argument.
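For reference, the core operation the accelerator parallelizes is the 2D convolution. The pure-Python sketch below shows one single-channel convolution (no padding, stride 1); it is a conceptual illustration of the per-channel work, not the IP core's internal datapath. In the optimized and extended types, several such calculations run in parallel in hardware.

```python
def conv2d_valid(image, kernel):
    """Single-channel 2D convolution, 'valid' padding, stride 1.

    Conceptual sketch of the calculation the accelerator performs;
    the optimized and extended CNN types execute four of these
    convolution calculations in parallel in hardware.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1                 # output height
    ow = len(image[0]) - kw + 1              # output width
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Multiply-accumulate over the kernel window.
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = acc
    return out
```

A typical layer applies many such kernels across many input channels, which is why parallel multiply-accumulate units directly determine throughput.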
Customized Convolutional Neural Network (CNN) IP – The CNN Plus IP is a flexible accelerator IP that simplifies implementation of ultra-low power AI by leveraging the parallel processing capabilities, distributed memory, and DSP resources of Lattice FPGAs.
Configurable Modes of Use – Three modes are available: COMPACT (lowest performance, smallest footprint), OPTIMIZED (higher performance in a resource-optimized footprint), and HIGH PERFORMANCE (highest performance with the largest footprint).
Easy to Implement – Models trained using common machine learning frameworks such as TensorFlow can be compiled using the Lattice Neural Network Compiler Tool and implemented in hardware using the CNN Plus Accelerator IP.