Lattice Blog

Use FPGAs to Optimize Always-on, Inferencing Applications


Posted 10/08/2018 by Deepak Boppana

A new generation of AI-based edge computing applications demands a wide range of performance capabilities. But how can developers build edge solutions that consume little power and occupy a minimal footprint at low cost, without compromising performance?

Meeting Demand for More Intelligence at the Edge


Posted 08/21/2018 by Deepak Boppana

Over recent decades, system design has evolved from one processing topology to another, swinging from centralized to distributed architectures and back again in a constant search for the ideal solution.


Inferencing Technology Stack Shrinks Time-to-Market for Edge Applications

Posted 06/12/2018 by Deepak Boppana

New Technology Promises to Accelerate Deployment of Machine Learning Inferencing Across Mass Market, Low-power IoT Applications
