Lattice Blog


How Applications Determine AI Development

Posted 05/09/2022 by Hussein Osman


The importance of AI and ML, the rise of edge computing, and the need for flexible programming in AI.

Artificial Intelligence (AI) is one of the biggest buzzwords in the technology industry today and is often thought of in broad terms to describe connected “smart” technology. However, when you think about all the different features enabled by AI and the outcomes AI solutions are designed to deliver, it becomes clear that the AI landscape and AI development are incredibly complex. The fact is that the specific intended use of AI technology both differentiates it and determines how it is developed, the standards and requirements built into it, and the testing it must undergo. So, rather than being a “blanket” technology, AI is essentially a network of customized solutions to complex technological challenges.

I recently had the opportunity to sit down and discuss the process of developing an AI solution for a specific application during a roundtable discussion with Lattice's VP of Segment Marketing Matt Dobrodziej and President and Chief Analyst of TECHnalysis Research, Bob O'Donnell. In this blog I’ll recap the highlights from that discussion and offer more perspective on how specific applications determine AI model development, the challenges around creating AI models, and how to make it easier to work with some of the leading technologies in the marketplace.

AI Enters the Mainstream and the Rise of Edge Processing

Once-unimaginable technology is now being brought into the consumer mainstream through smartphones, PCs, cars, and various other connected devices — as well as in commercial settings from factory floors to healthcare equipment. And, due to a seemingly unquenchable thirst for AI and Machine Learning (ML), we’re seeing innovation in these technologies frequently outpacing their implementation.

For designers, both the creative challenge and the opportunity lie in turning our current devices into something smarter and more useful by applying complex techniques like pattern matching or segmentation. In the past, running an AI inferencing application required the enormous computing power and capabilities of huge datacenters. Now, the same application can run locally, at much lower power, with accuracy similar to what was once only possible on a server.

This shift is driven largely by user security, privacy, latency requirements, and new hardware choices capable of running AI models with similar efficiency to server-based cloud implementations. More and more, AI and data crunching occurs locally on devices themselves which has massive implications for system development.

Rapidly Emerging Edge Computing Trend

An AI Problem and Solution Framework

Developers looking to break into the AI and ML space must identify and adopt the right foundational technologies for their specific AI application and design goals. This typically follows a three-step process:

1. Ideation — Deciding on the application and requirements

Building an AI workload requires an understanding of the application requirements, including performance. How much processing power is required? What power budget is available? How large is the battery, and how efficiently can it be used? Remember to take a step back and look at the system being built. What sensors are available to extract the data that makes the application smart?
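
To make the power-budget question concrete, here is a minimal back-of-the-envelope sketch of how a designer might estimate battery life for an always-on edge AI feature. All numbers are hypothetical placeholders for illustration, not Lattice specifications or measured values:

```python
# Back-of-the-envelope battery-life estimate for an always-on edge AI feature.
# All numbers below are illustrative placeholders, not measured values.

def battery_life_hours(battery_mah, battery_v, avg_power_mw):
    """Hours of operation for a given average power draw."""
    energy_mwh = battery_mah * battery_v  # battery energy in mWh
    return energy_mwh / avg_power_mw

def avg_inference_power_mw(active_mw, idle_mw, inferences_per_s, latency_s):
    """Duty-cycled average power: active while inferencing, idle otherwise."""
    duty = min(1.0, inferences_per_s * latency_s)  # fraction of time active
    return duty * active_mw + (1.0 - duty) * idle_mw

# Example: 2000 mAh / 3.7 V battery, 150 mW active, 5 mW idle,
# 10 inferences per second at 10 ms each.
p_avg = avg_inference_power_mw(150.0, 5.0, 10.0, 0.010)
print(f"average power: {p_avg:.1f} mW")
print(f"battery life: {battery_life_hours(2000, 3.7, p_avg):.0f} hours")
```

Running numbers like these early makes it clear whether a model must be pruned, quantized, or duty-cycled harder before the design is committed to hardware.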

2. Design — Building the model or taking a predesigned model and training it

After the parameters are defined, the next task is deciding on the right model. The choice of model follows from the kind of application the designer is building and, in turn, dictates the hardware needed to support the desired capabilities. Depending on the problem being solved, there are multiple models to pick from – including a variety available in the open-source community – and they differ by domain; audio processing models, for example, are quite different from vision models.
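
One way to picture this step is as a mapping from sensing modality and power budget to candidate model families. The sketch below is purely illustrative – the model names and the 500 mW cutoff are assumptions for the example, not an authoritative catalog:

```python
# Illustrative mapping from (sensing modality, power tier) to candidate
# open-source model families. Names and thresholds are examples only.

CANDIDATES = {
    ("vision", "low_power"): ["MobileNetV2", "quantized compact CNN"],
    ("vision", "high_perf"): ["ResNet-50", "YOLO-family detector"],
    ("audio",  "low_power"): ["keyword-spotting CNN", "tiny RNN"],
    ("audio",  "high_perf"): ["transformer-based speech model"],
}

def suggest_models(modality, power_budget_mw):
    """Return candidate model families for a modality and power budget."""
    tier = "low_power" if power_budget_mw < 500 else "high_perf"
    return CANDIDATES.get((modality, tier), [])

print(suggest_models("vision", 100))   # low-power vision candidates
print(suggest_models("audio", 2000))   # higher-performance audio candidates
```

In practice this mapping lives in the designer's head or a requirements document rather than code, but making it explicit forces the hardware implications of each model choice into the open.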

3. Testing — Making sure what was designed works as expected

Going from a working concept to a production solution requires extensive testing to ensure it functions as expected across different environmental and use-specific variables. For example, an application that tracks user attention would need to work for all users, under different usage scenarios and environmental conditions. Testing can be done by different means: model validation uses standard tools such as TensorBoard to expose the model to many representative samples and get an initial sense of how well the model works. Additional testing is done through regression testing, preferably on the target hardware, and finally through in-system user experience (UX) testing to catch corner cases and model weaknesses in real-life use scenarios.
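
The idea of validating across conditions rather than on a single overall average can be sketched as a simple per-condition accuracy check. The model, samples, and 0.9 threshold here are hypothetical stand-ins, not part of any real test suite:

```python
# Sketch of per-condition model validation: accuracy is computed separately
# for each environmental condition so corner-case weaknesses aren't hidden
# by a healthy overall average. Model and data are hypothetical stand-ins.
from collections import defaultdict

def accuracy_by_condition(samples, predict):
    """samples: iterable of (input, label, condition); predict: input -> label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for x, label, condition in samples:
        total[condition] += 1
        if predict(x) == label:
            correct[condition] += 1
    return {c: correct[c] / total[c] for c in total}

def passes_regression(per_condition_acc, threshold=0.9):
    """Every condition must individually clear the threshold."""
    return all(acc >= threshold for acc in per_condition_acc.values())

# Toy example: an identity "model" that happens to fail in low light.
samples = [(0, 0, "daylight"), (1, 1, "daylight"),
           (0, 0, "low_light"), (1, 0, "low_light")]
acc = accuracy_by_condition(samples, predict=lambda x: x)
print(acc)                     # daylight is perfect, low_light is degraded
print(passes_regression(acc))  # False: low_light misses the 0.9 bar
```

The same gating logic extends naturally to regression runs on target hardware: record per-condition scores for each model revision and fail the build if any condition regresses below its threshold.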

For AI on the Edge, FPGAs Present Unique Value

The opportunities that field programmable gate arrays (FPGAs) present for AI on the edge are many and varied. These flexible integrated circuits enable the development of custom workloads and final system designs that could alleviate AI development challenges.

Designers typically have to make difficult hardware decisions upfront. These choices can leave them siloed into a particular design later in the process, limiting what they're able to do solely based on the capabilities of the foundational components. FPGAs are inherently flexible, making them an ideal choice for edge computing because they can quickly adapt via software updates – even after deployment in an end-use system – if functionality changes are needed.

AI is constantly changing, with improvements and new innovations often outpacing system design. In the world of AI and ML, adaptable, re-programmable hardware solutions are the key to keeping up with the speed of innovation. Because FPGAs are programmable by nature, they can cut down on time-to-market and aren’t locked into a fixed function that could shorten a system or application’s lifespan. Equally important is their parallel processing capability, which lets demanding AI applications deliver higher performance while consuming less energy.

Why FPGA for Edge AI

The Lattice Nexus™ FPGA platform delivers power efficiency, performance, small size, and security features available at both configuration and run-time – all of which can help differentiate AI and ML solutions. Lattice FPGAs are classified based on the type of application the FPGA is designed to support:

  • General Purpose – designed for a broad range of application needs
  • Embedded Vision – designed for video bridging and processing
  • Ultra-Low Power – designed for power- and space-constrained applications
  • System Control and Security – designed for platform management and security

In addition to FPGAs for edge AI solutions, the Lattice sensAI™ solution stack is designed to accelerate the integration of flexible, low power inferencing at the edge by providing designers with everything they need to evaluate, develop, and deploy FPGA-based AI and ML solutions. Lattice’s integrated solutions can help developers quickly and easily deploy on-device AI for a wide range of applications.

If you’d like to hear more about this topic, you can check out the replay of the roundtable discussion mentioned previously here. Also, be on the lookout for more of Lattice’s roundtable discussions about the latest semiconductor and technology trends. If you have questions about Lattice solutions, reach out to us today!