Implementing Low Cost Intelligence in Smart Retail Applications
Posted 04/10/2019 by Dirk Seidel
We’ve published a couple of blog posts exploring how support for AI-powered imaging systems in embedded devices operating at the network edge can benefit specific applications in smart factories and smart homes. But embedded vision can also benefit the retail customer experience.
By now we’ve all seen self-service point-of-sale (PoS) systems in grocery stores and appreciate the time they save shoppers looking to quickly scan and pay for a few items without employee assistance. But I’m guessing the self-checkout experience has frustrated many of us, too: the PLU sticker for your apples is missing, or a UPC code hasn’t been added to the store’s database yet. Interruptions like these often require an employee’s help to resolve, which defeats the purpose of having a self-service PoS terminal in the first place.
One way to fix the missing PLU sticker and phantom bar code problems is to use image recognition backed by AI/machine learning. A neural network trained to recognize different types of fruit or a brand logo can run inference locally on the PoS, allowing it to properly identify and price an item in a future transaction even if it’s missing a PLU or UPC. It could also recognize multiple instances of the same item (a bag of apples, for example), automatically count each item, and then ring up the total price. The end result is a smoother checkout experience for the customer, and less employee time spent dealing with PoS issues.
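The checkout flow described above (classify each detected item, count duplicates, flag anything the model isn’t sure about, and total the price) can be sketched in a few lines. The neural network detection step itself is stubbed out here as a list of (label, confidence) pairs; the price table, labels, and confidence threshold are illustrative assumptions, not details from any actual PoS system.

```python
from collections import Counter

# Illustrative price table (assumed values for this sketch)
PRICE_PER_ITEM = {"apple": 0.50, "banana": 0.30, "cereal_box": 3.99}

# Detections below this confidence are flagged for employee review
CONFIDENCE_THRESHOLD = 0.80

def ring_up(detections):
    """Total a basket from (label, confidence) pairs, as an
    object-detection model running locally on the PoS might report them.

    Returns (line_items, total, needs_assistance), where line_items
    maps each label to a (count, subtotal) pair."""
    confident = [label for label, conf in detections
                 if conf >= CONFIDENCE_THRESHOLD]
    # If any detection fell below the threshold, ask for help
    needs_assistance = len(confident) < len(detections)
    counts = Counter(confident)  # count duplicates, e.g. a bag of apples
    line_items = {label: (n, round(n * PRICE_PER_ITEM[label], 2))
                  for label, n in counts.items()}
    total = round(sum(sub for _, sub in line_items.values()), 2)
    return line_items, total, needs_assistance

# Three apples plus a box of cereal, as the detector might see them
basket = [("apple", 0.97), ("apple", 0.93),
          ("apple", 0.91), ("cereal_box", 0.88)]
items, total, flag = ring_up(basket)
```

In a real deployment the `detections` list would come from the neural network accelerator, and low-confidence items would trigger the same employee-assistance path as a missing PLU does today, rather than silently mis-pricing the basket.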
That kind of inference-based machine learning application is well served by the parallel processing capabilities of FPGAs to accelerate neural network performance. FPGA-based machine learning also eliminates or greatly reduces the PoS system’s need to connect to the cloud, which cuts down on latency and connectivity costs.
And it’s never been easier, faster, or cheaper to implement object detection in embedded devices like a PoS system. Lattice has streamlined the path for system designers to get started with the Embedded Vision Development Kit hardware and the Lattice sensAI stack, featuring the world’s smallest deep neural network engine. To learn more, check out our latest sensAI whitepaper and visit the Embedded Vision Development Kit product page.