
Use FPGAs to Optimize Always-On Inferencing Applications

Posted 10/08/2018 by Deepak Bopanna

A new generation of AI-based edge computing applications demands a wide range of performance capabilities. But how can developers build edge solutions that consume little power and occupy a minimal footprint at low cost, all without compromising performance?

To achieve that goal, designers need silicon that lets them take advantage of rapidly changing network architectures and algorithms. They also need solutions that support a wide range of I/O interfaces. Finally, they need solutions that, through custom quantization, let them trade accuracy for lower power consumption.

FPGAs can play a key role in this process. Machine learning typically requires two types of computing workloads. In the first, the training phase, systems learn a new capability from existing data. Systems learn to identify a bird, for example, by analyzing tens of thousands of images. Because this training phase is highly compute-intensive, it is traditionally performed on high-performance hardware such as GPUs in the data center.

In the second phase of machine learning, called inferencing, AI systems apply trained models to new, recently collected data, identifying patterns and extending their knowledge. In this way the system learns as it works and grows more intelligent over time. But given latency requirements, escalating privacy concerns, and communication bandwidth limitations, designers often can’t afford to perform inferencing in the cloud. Instead, they must perform inferencing close to the source of the data, at the edge.

At the network edge, however, the deep learning techniques that rely on floating-point computation in the data center are impractical. Designers must develop more efficient solutions that not only meet accuracy targets, but also comply with the rigorous power, cost, and footprint requirements typical of the consumer and industrial IoT markets. At the edge, devices must perform inferencing using arithmetic with as few bits as possible. One way to achieve that is to switch from floating-point to fixed-point math. And by changing the way training is performed to compensate for the quantization from floating-point to fixed-point integers, designers can develop solutions that train faster with higher accuracy.
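To make that idea concrete, here is a minimal Python/NumPy sketch of symmetric, post-training quantization from 32-bit floating point to signed 8-bit integers. The single-scale-factor scheme and the random example weights are illustrative assumptions, not a description of any particular Lattice tool flow.

```python
import numpy as np

def quantize_to_int8(weights):
    """Map float32 weights onto signed 8-bit integers using one
    per-tensor scale factor (symmetric quantization)."""
    scale = np.max(np.abs(weights)) / 127.0  # 127 = largest int8 magnitude used
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values so we can measure the error
    introduced by the quantization step."""
    return q.astype(np.float32) * scale

# Illustrative weights; a real network holds thousands to millions of these.
w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_to_int8(w)
error = np.abs(w - dequantize(q, scale))

print(f"scale factor: {scale:.6f}")
print(f"mean absolute quantization error: {error.mean():.6f}")
```

The training-time compensation mentioned above is commonly known as quantization-aware training: the forward pass is run with quantized values, so the network learns weights that stay accurate after conversion to fixed point.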

Where can developers find a platform able to perform inferencing at the network edge? One answer lies in the parallel processing capability built into FPGAs. Because the hardware structure of an FPGA is not fixed, the functions its logic cells perform, and the interconnections between them, are determined by the developer. That allows the developer to program the FPGA to execute many operations simultaneously, in parallel, rather than sequentially.
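As a purely conceptual illustration, the Python sketch below contrasts the sequential multiply-accumulate loop a CPU would run with a vectorized form; the NumPy call stands in for the spatial parallelism of an FPGA, where each product would be computed by its own hardware multiplier in the same clock cycle. The values and function names are hypothetical.

```python
import numpy as np

def mac_sequential(weights, activations):
    """CPU-style execution: one multiply-accumulate per time step,
    so N inputs take N sequential steps."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += int(w) * int(a)
    return acc

def mac_parallel(weights, activations):
    """FPGA-style view: every product is formed at once by a dedicated
    multiplier and summed by an adder tree. The vectorized dot product
    is only a software stand-in for that spatial parallelism."""
    return int(np.dot(weights, activations))

# Narrow integer data, as a fixed-point edge design would use.
w = np.array([1, -2, 3, 4], dtype=np.int32)
a = np.array([5, 6, -7, 8], dtype=np.int32)

assert mac_sequential(w, a) == mac_parallel(w, a) == 4
```

A neuron’s output is exactly this kind of weighted sum, which is why replicating MAC units across an FPGA’s fabric accelerates inferencing so effectively.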

But any solution targeted at the edge must also be highly power-efficient. Historically, FPGAs, particularly higher-density FPGAs, have consumed relatively large amounts of power. A new generation of low-density, low-power FPGAs is changing that perception.

Both Lattice’s iCE40 UltraPlus and ECP5 FPGA families are designed to meet this evolving requirement. Designers building power-efficient AI-based solutions can use the ECP5 family to accelerate neural networks at power levels down to 1 W, and the iCE40 UltraPlus family to accelerate neural networks at power levels down in the milliwatt range.

To help developers get to market more quickly, Lattice also recently unveiled the industry’s first complete technology stack for low-power, broad-market IoT solutions. Called Lattice sensAI, this comprehensive ecosystem combines hardware kits, neural network IP cores, software tools, reference designs, and custom design services. Moreover, sensAI demos and common AI use cases offer a blueprint for the development of object detection, key-phrase detection, and other popular always-on AI solutions.

Obviously, today’s developers still face a number of challenges as they attempt to deliver competitive AI-based solutions for smart home, smart city, smart car and smart factory applications. But the key components needed to simplify and accelerate the development of broad market AI solutions are clearly falling into place.
