Convert voice commands into system actions – Use popular training tools to train a neural network on a key phrase command, then use the Lattice Neural Network Compiler to bridge the training output into the Lattice inference engine. Finally, integrate the inference engine, built on an iCE40 UltraPlus-5K FPGA with a BNN Accelerator IP core, into your design for added system intelligence.
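The workflow above trains a network in software and then maps it onto a binarized neural network (BNN) accelerator. As a rough illustration of what the accelerator computes, here is a minimal sketch of binarized inference in NumPy: weights and activations are constrained to {-1, +1}, which is what lets hardware replace multiply-accumulate with XNOR-and-popcount logic. The layer sizes, weights, and the two-class "key phrase vs. background" framing are illustrative assumptions, not Lattice's actual model or tool output.

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1}, the format a BNN accelerator operates on
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_layer(activations, weights):
    # Binarized dense layer: inputs and weights are both in {-1, +1},
    # so each dot product can be implemented as XNOR + popcount in hardware
    return binarize(binarize(activations) @ binarize(weights))

# Toy two-layer "key phrase" scorer (shapes and weights are made up)
rng = np.random.default_rng(0)
w1 = rng.standard_normal((16, 8))
w2 = rng.standard_normal((8, 2))

features = rng.standard_normal(16)   # e.g. audio features from the mic front end
hidden = bnn_layer(features, w1)
scores = bnn_layer(hidden, w2)       # one score per class: key phrase / background
print(scores.shape)
```

In a real flow, the trained float weights would be binarized and converted by the Lattice Neural Network Compiler rather than computed by hand as here.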
Always-on listening in under 1 mW – Connect a digital microphone directly to the Lattice inference engine to enable always-on listening with key phrase detection, as well as audio buffering in 128 Kbytes of integrated SRAM.
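The 128 Kbytes of on-chip SRAM bounds how much audio can be buffered while listening. A quick back-of-the-envelope calculation, assuming a 16 kHz, 16-bit mono PCM stream (these audio parameters are assumptions, not stated by the source):

```python
SRAM_BYTES = 128 * 1024      # 128 Kbytes of integrated SRAM (from the spec)
SAMPLE_RATE_HZ = 16_000      # assumed microphone sample rate
BYTES_PER_SAMPLE = 2         # assumed 16-bit PCM, mono

# Seconds of audio that fit if the whole SRAM were used as an audio buffer
buffer_seconds = SRAM_BYTES / (SAMPLE_RATE_HZ * BYTES_PER_SAMPLE)
print(f"{buffer_seconds:.1f} s of audio")  # prints "4.1 s of audio"
```

Around four seconds of buffered audio is comfortably more than a typical spoken key phrase, which is what makes wake-word buffering practical on-chip; in practice some SRAM is also needed for the network itself, so the usable buffer is smaller.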
Multi-engine BNN in a 2.15 mm x 2.55 mm FPGA – The Lattice inference engine with BNN architecture fits into two package options for the iCE40 UltraPlus FPGA. A 30-ball CSP package (2.15 mm x 2.55 mm) with 0.4 mm ball pitch delivers the smallest neural network footprint within an FPGA. A 48-pin QFN package (7.0 mm x 7.0 mm) with 0.5 mm pin pitch enables lower-cost PCB designs.