

Inferencing Technology Stack Shrinks Time-to-Market for Edge Applications

AI / Machine Learning
Posted 06/12/2018 by Deepak Boppana

A new trend in system design portends huge opportunities and challenges for designers of edge solutions. While a growing number of companies and organizations are turning to the cloud to reduce costs and maximize efficiencies, requirements for lower latency, escalating privacy concerns and the limitations of communication networks are driving demand for more intelligence in the “Things” of the IoT at the network edge. These new applications will require machine learning-based computing resources located closer to the source of IoT sensor data than the cloud, including computational resources on the devices themselves.

How large is this opportunity? Analysts at Gartner predict that by 2022 close to half of all enterprise-created data will be processed outside a traditional data center or cloud, a huge jump from the roughly 10 percent processed that way today. Where will that data be generated? More than likely it will come from a wide range of rapidly growing, broad-market edge applications in mobile devices, smart homes, smart factories, smart cities and smart cars. Analysts at IHS Markit back up that contention by predicting the deployment of more than 40 billion IoT devices between 2018 and 2025. Along the way, they expect the convergence of emerging technologies like the IoT, AI-based edge computing and cloud analytics to disrupt virtually every vertical market.

Expect FPGAs to play a major role in processing this avalanche of data. Machine learning typically involves two types of computing workloads. In the training phase, a system learns a new capability by collecting and analyzing large amounts of existing data; a facial detection function, for example, learns to detect a human face by analyzing tens of thousands of images. This phase is by its nature highly compute-intensive and is therefore typically conducted in the data center on high-performance hardware. The second phase, called inferencing, applies the trained system’s capabilities to new data, identifying patterns and performing tasks. The same facial detection function, once trained, detects human faces in the new images it encounters in the field. But in some cases designers cannot afford to perform inferencing in the data center because of latency, privacy and cost barriers. Instead, they must perform those computational tasks close to the edge.
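
To make the two phases concrete, here is a minimal sketch in Python, using scikit-learn as a stand-in for any training framework. The dataset and model are purely illustrative and not part of any Lattice workflow; the point is the split between a heavy, one-time training step and a lightweight, repeated inferencing step.

```python
# Minimal sketch of the two machine learning phases described above.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

# Phase 1 -- training: compute-intensive, typically done in the data center.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Phase 2 -- inferencing: the trained model is applied to new data.
# This is the lightweight workload that can move to the network edge.
predictions = model.predict(X_new)
print(predictions[:10])
```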

One way designers can quickly bring more computational resources to the network edge, using devices already in production today, is to exploit the parallel processing capabilities inherent in FPGAs to accelerate neural network performance. Moreover, by employing lower-density FPGAs optimized for low-power operation and available in compact packages, designers can meet the stringent power and footprint limits of fast-growing consumer and industrial applications. For instance, designers can use Lattice’s ECP5 FPGA family to accelerate neural networks at under 1 W, and Lattice’s iCE40 UltraPlus FPGAs to accelerate neural networks in the milliwatt range.
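
One common technique behind such low-power inference, sketched below as a generic illustration in NumPy rather than the actual sensAI tool flow, is quantizing a trained network’s weights from floating point to small integers, the kind of fixed-point values an FPGA’s parallel multipliers can process very efficiently. The layer size and values here are hypothetical.

```python
# Illustrative post-training quantization of a weight matrix to 8-bit
# integers. This is a generic sketch, not the Lattice sensAI tool flow.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(64, 64)).astype(np.float32)  # hypothetical layer

# Symmetric linear quantization: map the float range onto int8 [-127, 127].
scale = np.max(np.abs(weights)) / 127.0
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# The FPGA stores q_weights and multiply-accumulates in integer
# arithmetic; the scale factor restores results to real units.
reconstructed = q_weights.astype(np.float32) * scale
print("max quantization error:", np.max(np.abs(weights - reconstructed)))
```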

But it will take more than silicon to get millions of edge solutions to market. Designers need devices that not only give them maximum design flexibility, but also allow them to take advantage of rapidly evolving neural network architectures and algorithms. They need hardware and software tools that allow them to build AI devices delivering high performance without violating power, footprint and cost constraints. Just as important, they need the reference designs, demos and design services necessary to build custom solutions in a rapidly shrinking time-to-market window.

To address this growing need and to help accelerate and simplify the development of AI solutions in edge devices, Lattice released sensAI, the first full-featured FPGA-based machine learning inferencing technology stack. It combines hardware kits, neural network IP cores, software tools, reference designs and custom design services. With this ecosystem, designers can build solutions optimized for low-power operation (1 mW to 1 W), small package size (5.5 mm² to 100 mm²) and high-volume pricing (approximately $1 to $10 USD), while retaining the design flexibility of FPGAs to support evolving algorithms, interfaces and tailored performance.

Clearly a revolution has begun in the development of AI-based edge devices. Over the next few years we can expect to see millions of new devices hit the market, designed to bring higher levels of intelligence to the edge. As the first comprehensive development ecosystem of its type, Lattice’s sensAI offers designers a fast path to AI-based broad market applications.
