Lattice Blog

How FPGAs Will Impact AI in 2024

Posted 02/08/2024 by Bob O’Donnell, President and chief analyst, TECHnalysis Research

As we enter the new year, there’s one topic in the tech world that no one seems to be able to get off their mind: AI. Indeed, AI has been talked about so much by so many different companies that people are now starting to notice when companies don’t mention it!

In the semiconductor world, most of the AI attention has been focused on GPUs or dedicated AI accelerator chips like NPUs and TPUs. But it turns out there’s quite a range of components that can directly impact and even run AI workloads. And yes, FPGAs are one of them.

For those who understand the flexible, programmable nature of FPGAs, that might not be a big surprise, but the connection between the two may not be obvious to many others. The real trick is having software that allows classic AI development approaches, such as convolutional neural networks (CNNs), to be optimized for the types of customizable circuit designs that FPGAs enable.

What also helps is that FPGAs can create multiple parallel compute pipelines—conceptually similar to what GPUs offer—which can be a big benefit for the types of matrix multiplication calculations that are at the core of so many AI algorithms. In addition, the flexible nature of FPGA fabric design can be used to distribute blocks of memory across a chip, allowing for optimized data movement—another critical demand for AI software.
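The point about parallel pipelines can be made concrete with a small sketch. This is plain Python, not FPGA code or anything from Lattice's tools; it simply shows why matrix multiplication maps so naturally onto parallel hardware: every output element is an independent dot product, so on an FPGA each one could be assigned to its own multiply-accumulate pipeline with its own local block of memory.

```python
# Illustrative sketch only: why matrix multiplication parallelizes well.
# Each output element C[i][j] depends only on row i of A and column j
# of B, so all of these computations are independent of one another --
# on an FPGA, each could map to its own multiply-accumulate pipeline.

def matmul_element(A, B, row, col):
    """One independent unit of work: C[row][col] = dot(A[row], B[:, col])."""
    return sum(A[row][k] * B[k][col] for k in range(len(B)))

def matmul(A, B):
    """Assemble C element by element; no call here depends on another,
    so in hardware they could all run concurrently."""
    rows, cols = len(A), len(B[0])
    return [[matmul_element(A, B, i, j) for j in range(cols)]
            for i in range(rows)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

In software these loops run one after another; the appeal of an FPGA fabric is that the independent inner computations can be laid out side by side in silicon, each fed by a nearby block of distributed memory.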

Lattice Semiconductor has been working on software tools that enable these types of capabilities for several years now and has a full suite of offerings. These applications can do everything from adapting existing or newly built AI models into a format that runs most efficiently on the company's low-power devices, to designing the circuits that run those models most effectively. This complete closed-loop system gives companies that want to integrate AI-powered features into their devices and other hardware the ability to do so.

On the AI model side, Lattice's sensAI solution can take models that have been trained in industry-standard AI frameworks such as TensorFlow, Caffe, and Keras, and adapt them to run on FPGA resources using cutting-edge techniques like model quantization, pruning, and sparsity exploitation. The company's Neural Network Compiler can then analyze the model and recommend how to run it most efficiently given the available circuit types and on-chip interconnects. On the hardware side, Lattice's Propel and Radiant chip design software can be used to create the right combination of circuits to accelerate those models in as power-efficient a manner as possible.
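Of the techniques mentioned above, quantization is the easiest to illustrate. The sketch below is a generic int8 weight-quantization scheme in plain Python; it is not sensAI or any Lattice API, just the common scale-based approach used across the industry: float weights are mapped to small integers, which cuts storage and lets the hardware use cheap integer arithmetic at the cost of a little precision.

```python
# Illustrative sketch of post-training weight quantization (generic
# int8 scaling, not Lattice's sensAI tooling). Float weights are
# mapped onto the integer range [-127, 127] with a single scale
# factor, shrinking storage 4x versus float32 and enabling cheap
# integer multiply-accumulate units in hardware.

def quantize_int8(weights):
    """Map float weights onto int8 [-127, 127] with one shared scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the integers."""
    return [q * scale for q in quantized]

weights = [0.12, -0.5, 0.33, 0.0]
q, scale = quantize_int8(weights)
print(q)  # [30, -127, 84, 0]
# dequantize(q, scale) approximates the original weights; the maximum
# error is half of one quantization step (scale / 2).
```

Pruning and sparsity exploitation work toward the same goal from a different angle: rather than shrinking each weight, they remove near-zero weights entirely so the hardware can skip those computations.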

Rather than having to start from scratch when creating these chip designs, companies can leverage critical IP that Lattice has specifically built for those purposes, such as their range of CNN accelerators. These pre-built sets of circuits provide the core foundation for a wide range of applications including people and object detection, object classification, key phrase recognition and much more. Plus, because of the programmable nature of FPGAs, these IP blocks can be edited and added to in order to meet the particular requirements of a given application.

One easy-to-overlook but very important implication of these pre-built IP blocks is that they enable a wider range of people to create custom FPGA designs. This is critical because while many people acknowledge the powerful, flexible nature of FPGAs, they have a reputation for being difficult to program. Writing the RTL code that sits at the heart of FPGA design has been a specialized task that only a limited number of people have mastered, so tools that allow chip designers to connect pre-built elements together in a Lego block-type fashion can make the process much easier.

Similarly, the ability to leverage traditional AI frameworks such as TensorFlow that many software developers are already familiar with makes the process of creating AI models that run on FPGAs available to a much broader spectrum of people.

In fact, it’s the combination of these types of simplification efforts that makes the future potential of FPGA usage in AI applications so intriguing. As companies across industries rush to investigate how they can best bring the power of AI into applications from automotive to medical to consumer to industrial and beyond, there’s going to be a significantly wider base of potential customers seeking out semiconductor solutions to enable these capabilities. While in the past only a subset of these companies may have known about or considered FPGAs as a potential option, the types of tools that Lattice Semiconductor is making available can make FPGAs a much stronger choice for a much larger percentage of them.

There’s little doubt that 2024 is going to see a huge amount of effort to integrate AI-powered features and capabilities into a much wider range of applications. The exciting potential is that FPGAs could end up driving a big part of those new efforts.

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
