The Lattice Semiconductor Advanced CNN Accelerator IP Core is a calculation engine for Deep Neural Networks with fixed-point weights. It computes complete Neural Network layers, including convolution, pooling, batch normalization, and fully connected layers, by executing a sequence of firmware code with weight values generated by the Lattice sensAI™ Neural Network Compiler. The engine is optimized for convolutional neural networks, making it well suited to vision-based applications such as classification, object detection, and tracking. The IP Core does not require an extra processor; it performs all required calculations by itself.
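To make the fixed-point arithmetic concrete, the sketch below illustrates the general style of computation such an engine performs for one convolution window: int8 activations and weights multiplied into a wide accumulator, then requantized back to int8. This is a minimal illustration only, not the IP Core's actual implementation or data path; the function name, layer shapes, and scale factors are invented for the example.

```python
def conv2d_int8(x, w, x_scale, w_scale, y_scale):
    """Valid 2-D convolution on int8 activations x and weights w.
    Products are summed in a wide integer accumulator, then the result
    is requantized to int8 using the given per-tensor scale factors.
    (Illustrative sketch only; not the IP Core's implementation.)"""
    kh, kw = len(w), len(w[0])
    oh = len(x) - kh + 1
    ow = len(x[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            acc = 0  # wide (e.g. 32-bit) accumulator
            for di in range(kh):
                for dj in range(kw):
                    acc += x[i + di][j + dj] * w[di][dj]
            # Requantize: real value = acc * x_scale * w_scale,
            # re-expressed on the output scale y_scale.
            y = round(acc * x_scale * w_scale / y_scale)
            row.append(max(-128, min(127, y)))  # saturate to int8
        out.append(row)
    return out

# Toy 3x3 activation map and 2x2 kernel (values are arbitrary int8).
x = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
w = [[1, 0], [0, 1]]
print(conv2d_int8(x, w, 0.05, 0.02, 0.1))
```

In a hardware engine the same accumulate-then-requantize pattern is applied across all output channels in parallel, which is where multiple convolution engines and a wider data path increase throughput.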
Higher Throughput – 64-bit data path engine for Avant, and 32-bit data path engine for Avant and CPNX FPGAs.
Faster Run Time – Vector ALU for enhanced pixel-wise operations, and accelerated pre- and post-ML image processing algorithms.
Improved Performance – Supports 1 to 4 convolution engines.