

Meeting Demand for More Intelligence at the Edge

Posted 08/21/2018 by Deepak Boppana


Over recent decades, system design has evolved from one processing topology to another, from centralized to distributed architectures and back again, in a constant search for the ideal solution. As computational requirements have skyrocketed, the industry has migrated to a more centralized approach built around cloud-based computing. Today, businesses prefer to perform high-level computation and analysis in the cloud, where they can take advantage of its virtually unlimited computational and storage resources, high reliability and low cost.

As companies adopt machine learning techniques and employ higher levels of artificial intelligence (AI), it seems likely that computational resources in the cloud will play an increasingly pivotal role in most organizations' plans. But the cloud is not the ideal solution for all applications. Today's machine learning for AI typically occurs in two phases. First, systems are trained to learn a new capability by collecting and analyzing large amounts of existing data. For example, a system learns how to recognize a gesture by viewing thousands of images. This phase can be highly compute-intensive: training neural networks for applications such as image recognition can require terabytes of data and exaflops of computation. Accordingly, these tasks are typically performed in the data center.

The second phase of machine learning is called inferencing. Here the trained system applies what it has learned to new data encountered in the field. In a typical example, a facial detection function uses the patterns it learned during training to recognize a human face in images it has never seen before.
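In code terms, the split between the two phases looks roughly like the toy sketch below. It is written in plain Python with NumPy and does not represent any real gesture or face model: phase one trains a small classifier on a batch of labeled data (the data-center step), and phase two applies the trained weights to a single new sample (the inferencing step at the edge).

```python
import numpy as np

# Toy sketch of the two phases (not a real gesture or face model):
# phase 1 trains a tiny linear classifier in the "data center",
# phase 2 runs inference on a new sample "in the field".

rng = np.random.default_rng(42)

# --- Phase 1: training on a large batch of labeled examples ---
X_train = rng.standard_normal((1000, 16))                # 1000 labeled "images"
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(float)

w = np.zeros(16)
for _ in range(200):                                      # simple gradient descent
    pred = 1 / (1 + np.exp(-X_train @ w))
    w -= 0.1 * X_train.T @ (pred - y_train) / len(y_train)

# --- Phase 2: inferencing on new data at the edge ---
x_new = rng.standard_normal(16)                           # one new sensor frame
prob = 1 / (1 + np.exp(-x_new @ w))
print(f"Detection probability for the new sample: {prob:.2f}")
```

The expensive loop over a thousand examples happens once, up front; the field device only ever executes the last few lines, which is why the computational and power profiles of the two phases differ so sharply.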

In many cases this compute-intensive analysis would also be performed in the data center. But a number of factors are conspiring to redefine computational, bandwidth and power demands at the network edge. The demand for "always-on" sensing continues to grow. Today cameras run 24/7 to watch for anomalies on manufacturing lines, to monitor speed and lane compliance in an automobile, or to identify specific gestures or facial characteristics in a mobile application. Transmitting this "always-on" data back to the data center poses new security challenges. Designers are reluctant to send captured images to the cloud, fearing it increases privacy risk and adds latency. Instead, they want to perform those tasks locally, which drives up local computational requirements and, in some cases, power consumption as well. That, in turn, poses a major challenge for battery-powered mobile products.

How can designers bring more computational power to the network edge without driving up power requirements, increasing bandwidth needs, or exposing users to privacy risk? One way to solve this dilemma is by tapping the inherent parallel processing capabilities of FPGAs, particularly a new generation of low-density FPGAs optimized for low-power operation. These devices combine extensive embedded DSP resources and a highly parallel architecture with competitive power, footprint and cost. Devices like Lattice's iCE40 UltraPlus can accelerate neural networks in the milliwatt range, while Lattice's ECP5 FPGAs can operate at under 1 W. They also give engineers a high degree of design flexibility by allowing them to trade off precision for greater processing speed or lower power. The DSP blocks in Lattice's ECP5 FPGAs, for example, can compute fixed-point math at lower power per MHz than GPUs performing floating-point math. Moreover, these devices are available in highly compact packages, which helps designers meet the stringent footprint requirements commonly found in consumer and industrial applications.
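To make the precision-for-power tradeoff concrete, the sketch below (again plain Python with NumPy, not Lattice tooling or the actual DSP-block arithmetic) quantizes float32 weights and activations to 8-bit fixed point, performs the multiply-accumulate entirely in integers, rescales once at the end, and compares the result to the full floating-point answer.

```python
import numpy as np

# Rough illustration of fixed-point inference arithmetic (not Lattice tooling):
# quantize float32 values to signed 8-bit integers, do the dot product in
# integer math, and rescale the accumulated result back to real units.

rng = np.random.default_rng(0)
weights = rng.standard_normal(64).astype(np.float32)
activations = rng.standard_normal(64).astype(np.float32)

def quantize(x, bits=8):
    """Map a float array to signed fixed-point integers with a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1                       # 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax                 # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

qw, sw = quantize(weights)
qa, sa = quantize(activations)

# Integer multiply-accumulate, then a single rescale back to floating point.
fixed_point_result = int(np.dot(qw, qa)) * sw * sa
float_result = float(np.dot(weights, activations))

print(f"float32 dot product:   {float_result:.4f}")
print(f"8-bit fixed-point dot: {fixed_point_result:.4f}")
```

The two results agree to within a small quantization error, which is the essence of the tradeoff: narrow integer multipliers and accumulators cost far less power and area than floating-point units, at the price of a bounded loss of precision.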

Another key factor in the success of edge products will be the availability of a development ecosystem capable of accelerating prototype development of AI-based, sub-1 W solutions. To address this need we recently announced Lattice sensAI, the first comprehensive technology stack for inferencing that brings together the modular hardware kits, neural network IP cores, software tools, reference designs and custom design services designers require to bring ultra-low power AI applications to market. By combining flexible FPGA hardware and software, sensAI accelerates the integration of on-device sensor data processing and analytics in edge devices. Moreover, designers can use sensAI demos and common AI use cases like object detection and key-phrase detection to build custom solutions within short development cycles.

Clearly, rising demand for AI-based solutions at the edge presents a number of new challenges. How can developers bring higher levels of computational power to these products without risking privacy or running up against bandwidth and power limitations? And how can designers rapidly bring new AI-based solutions to market without the hardware kits, IP cores, software tools, reference designs and custom design services they need? Expect Lattice's low power, small footprint FPGAs and its new sensAI technology stack to play a key role in the rapid evolution of AI at the edge.
