Detect the Presence of a Human Face in Under 1 mW – This demo uses artificial intelligence (AI) to implement a human detection algorithm. FPGAs process data in parallel, making them more power efficient at such tasks than a microprocessor.
Always-on, Local Intelligence Improves Security – Bringing AI to the network edge is challenging, but it also offers tremendous opportunity. Designing AI into an iCE40 UltraPlus FPGA instead of relying on cloud-based resources can dramatically cut power consumption while accelerating response time. At the same time, keeping processing local improves security. Designers also gain always-on intelligence, even when the network is turned off to save power.
Multi-Engine BNN in a 2.15 mm x 2.55 mm FPGA – The Lattice inference engine with BNN architecture fits into two package options for the iCE40 UltraPlus FPGA. A 30-ball CSP package with 0.4 mm ball pitch yields, at 2.15 mm x 2.55 mm, the smallest neural network implemented within an FPGA. A 48-pin QFN package with 0.5 mm pin pitch, at 7.0 mm x 7.0 mm, enables lower-cost PCB designs.
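To illustrate why a BNN (binarized neural network) maps so efficiently onto a small FPGA: with weights and activations restricted to {-1, +1}, a neuron's multiply-accumulate collapses into an XNOR followed by a popcount on packed bits, which synthesizes to a handful of LUTs instead of multipliers. Below is a minimal Python sketch of that equivalence; the function names and encoding are illustrative assumptions, not part of the Lattice inference engine or its toolchain.

```python
def bnn_dot_reference(a, w):
    """Plain dot product of two vectors with values in {-1, +1}."""
    return sum(x * y for x, y in zip(a, w))

def bnn_dot_xnor(a_bits, w_bits, n):
    """Same result computed the hardware-friendly way.

    Vectors are packed into integers, with bit i = 1 encoding +1
    and bit i = 0 encoding -1. XNOR marks matching positions;
    popcount counts them; 2*matches - n recovers the dot product.
    """
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

# Example: a = [+1, -1, +1, +1], w = [+1, +1, -1, +1]
# Packed (bit i holds element i): a -> 0b1101, w -> 0b1011
print(bnn_dot_reference([1, -1, 1, 1], [1, 1, -1, 1]))  # -> 0
print(bnn_dot_xnor(0b1101, 0b1011, 4))                  # -> 0
```

Because the XNOR/popcount form needs no multipliers and only 1-bit storage per weight, many such engines fit in parallel within the iCE40 UltraPlus's small logic and memory budget, which is what enables the multi-engine, sub-milliwatt operation described above.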