An FPGA Primer
Posted 01/10/2020 by Bob O’Donnell
One of the truisms of modern business is that as industries mature, they tend to specialize. Customer demands typically become more sophisticated over time and that leads to products optimized to meet those specific needs.
That’s certainly the case in the tech industry overall and the semiconductor business in particular. There is now a significantly wider and more complex range of tech-powered products than we’ve ever seen, and it’s taken an increasingly diverse set of semiconductor-based computing solutions to enable their creation.
What’s particularly interesting about advances in semiconductors over the last several years is that there has been an explosion not just in the number of chips, but even in the basic architectures powering this silicon. The industry is moving well beyond the basic CPU and microcontroller designs that have served as the brains of most technology products to date and is seeing growing roles for accelerator chips of many types, including GPUs, APUs, TPUs, and a somewhat lesser-known type called FPGAs (field programmable gate arrays).
In many instances, these accelerators work alongside a chip like a CPU to speed up certain tasks that are critical to an application, such as image recognition for a computer vision application. The concept of multiple chip architectures working together is commonly called heterogeneous computing and it’s one of the hottest and most important developments to occur in the tech industry for some time.
Despite the relative “newness” of heterogeneous computing, however, many of the complementary architectures behind these accelerator chips have been around for some time. The first FPGAs, for example, were designed in the mid-1980s, and they’ve been used as a key component of many different types of tech products ever since. The concept behind the original FPGAs was to create a more flexible alternative to chips called ASICs (application specific integrated circuits), which, as their name implies, are specialized pieces of silicon intended for specific products. ASICs are designed to perform certain types of functions very quickly—even more so than a general-purpose computing engine like a CPU—so they can be a great option in certain applications. Unfortunately, ASICs can be very hard (and expensive) to design, so they’re not always the best real-world choice. Plus, crucially, once an ASIC is designed and produced, its functionality cannot be changed without essentially designing and building a whole new chip.
FPGAs, on the other hand, are inherently flexible chips that, as their name suggests, can be programmed or re-programmed in the “field”—that is, after the chip is built and functioning within a device. This “updateability” is an incredibly useful capability because it allows companies to add new capabilities (or fix flaws in existing functions) in devices that include FPGAs. So, for example, as a machine learning-based algorithm “learns” more and evolves over time, an FPGA that was first programmed to run that algorithm can be updated to run the newer version of it. Not surprisingly, the flexibility that FPGAs offer typically comes at a slightly higher price than an equivalent ASIC, but in many applications the value they provide is well worth it.
Practically speaking, this re-programmability also means that companies can bring FPGA-based products to market more quickly, because they can update the products’ functionality after they have been built instead of having to complete all the functional design beforehand.
Housed within the confines of an FPGA are arrays of logic blocks and high-speed interconnects that can be used to control the inner workings of the chip. Conceptually, it’s not much different than getting a big set of Lego blocks that can be rearranged and reconfigured exactly as desired. A key benefit of modern FPGA designs is that functions can be run in parallel, allowing significant acceleration of certain types of workloads. This is also a key distinction between FPGAs and microcontrollers, most of which can only execute instructions serially.
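To make the Lego analogy a bit more concrete: the logic blocks in most FPGAs are built around small lookup tables (LUTs), and “programming” the chip largely means loading new table contents rather than changing any physical circuitry. The rough software sketch below (the function names here are illustrative, not any vendor’s API) shows the idea with a tiny 2-input LUT:

```python
# Rough software model of an FPGA lookup table (LUT): a logic block
# stores a truth table, and "re-programming" the block just means
# loading a different table -- no new silicon required.

def make_lut(truth_table):
    """Return a 2-input logic function defined by its 4-entry truth table."""
    def lut(a, b):
        # The input bits form an index that selects one stored
        # configuration bit, just as LUT inputs do in hardware.
        return truth_table[(a << 1) | b]
    return lut

# "Program" a block as an AND gate...
and_gate = make_lut([0, 0, 0, 1])
# ...then "re-program" the same kind of block as an XOR gate.
xor_gate = make_lut([0, 1, 1, 0])

print(and_gate(1, 1))  # 1
print(xor_gate(1, 0))  # 1
```

Real FPGA LUTs typically have 4–6 inputs and are wired together by the configurable interconnect, and—unlike this serial Python model—thousands of them evaluate simultaneously, which is where the parallelism described above comes from.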
Like the larger trend of specialization, FPGAs themselves have also started to branch off into different areas with some designs targeted towards high-power data center applications and others toward very low-power designs. In either case, they provide the kind of customizable computing power that’s now become an essential part of our modern devices and services.
Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.