Lattice Blog


System Architecture Options for On-Device AI

Architecting Low Power AI
Posted 11/14/2018 by Deepak Boppana

How often is low power the determining factor for success? When designing solutions for AI inferencing in always-on edge devices, it almost always is: power consumption must be measurable in milliwatts. Think about it: AI at the edge solves real-world problems, and is – or very soon will be – everywhere, especially with the uptake of smart home IoT products such as door entry systems that must sit in an ‘intelligent standby mode’, becoming fully ‘awake’ only when face detection software has identified that it is a person, not a cat, at the front door. With low power inferencing at the edge – on the device itself – data is not continually uploaded and analyzed unnecessarily.

Many such edge devices are battery-operated or thermally constrained, leading to stringent power limitations. The inferencing solution must also be flexible enough to adapt to evolving deep learning algorithms and architectures, including on-device training. Miniature, low power FPGAs are proving highly suitable for such applications: they offer a winning combination of flexibility – enabling legacy interfaces, and therefore low-cost displays, sensors and cameras, to be supported – together with customizable, in some cases user-programmable, levels of performance and accuracy. FPGAs also possess inherent parallel processing capabilities, which are useful for implementing machine learning inferencing.

Given the unique mix of requirements for on-device edge AI, developers must architect their systems thoughtfully, both at the system level and the chip level. FPGAs can implement AI as standalone solutions or in conjunction with other components. There are three main architectural choices:

Stand-alone integrated FPGA

This is the most highly integrated approach, suitable for space-constrained applications such as smart doorbells. FPGAs in package sizes from 5.5mm² to 100mm² can be used, depending on task complexity and the power consumption budget. Integration also improves security, which is becoming an ever more important consideration.

FPGA as Activity Gate to ASIC/ASSP

In this configuration – in a video surveillance camera, for example – the FPGA performs the initial detection of, say, a key phrase or object, waking a high-performance ASIC/ASSP for further analysis only if required. System power consumption is reduced, and unnecessary data – video of nothing happening, or false-trigger events – is not wastefully uploaded to the cloud.
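The activity-gate pattern can be sketched in a few lines of host-side pseudologic. This is a conceptual illustration only, not vendor code: the frame structure, threshold value, and function names are all hypothetical stand-ins for the low-power FPGA detector and the heavy ASIC/ASSP stage.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    motion_score: float  # output of the cheap always-on detector (FPGA side)

WAKE_THRESHOLD = 0.8  # illustrative tuning parameter

def heavy_analysis(frame: Frame) -> str:
    # Stand-in for the high-performance ASIC/ASSP doing full inference.
    return "person" if frame.motion_score > 0.9 else "unknown"

def activity_gate(frames):
    """Run the cheap detector on every frame; wake the heavy
    stage only when the score crosses the threshold."""
    results = []
    for frame in frames:
        if frame.motion_score >= WAKE_THRESHOLD:
            results.append(heavy_analysis(frame))  # ASIC woken for this frame
        else:
            results.append(None)  # ASIC stays asleep; nothing uploaded
    return results
```

The power saving comes from the branch: in the common case (nothing happening) only the milliwatt-class detector runs, and the expensive stage is never clocked.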

FPGA as co-processor to MCU

Using a small, low power FPGA as a co-processor to a low-end MCU enables low-cost, flexible system control and the ability to interface with on-board legacy devices, including sensors, so AI can be easily added, even as a design retrofit. The FPGA's neural network acceleration also allows performance/power trade-offs to be scaled to suit the application.
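The division of labor in this co-processor arrangement can be sketched as follows. This is an assumed, simplified model – the class and function names are illustrative, and the FPGA call stands in for a real bus transfer (e.g. over SPI) to the accelerator fabric.

```python
def fpga_accelerate(weights, activations):
    # Stand-in for the FPGA's parallel multiply-accumulate fabric:
    # without the co-processor, the MCU would compute this dot
    # product serially in software.
    return sum(w * a for w, a in zip(weights, activations))

class McuController:
    """Models the MCU side: system control and sensor handling stay
    on the MCU, while the compute-heavy step is shipped to the FPGA."""
    def __init__(self, weights):
        self.weights = weights

    def infer(self, sensor_sample):
        acc = fpga_accelerate(self.weights, sensor_sample)
        return 1 if acc > 0 else 0  # simple threshold decision on the MCU
```

Because only the inner multiply-accumulate moves to the FPGA, the pattern retrofits onto an existing MCU design without changing its control logic.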

ECP5 and iCE40 UltraPlus FPGAs from Lattice Semiconductor – together with the company’s sensAI soft IP, tools, development boards and reference designs – provide building blocks that enable designers to quickly and simply develop cost-effective edge AI products addressing challenges in the home, workspace, factory, transportation and every other aspect of our 21st-century lives.

Learn more