Human to Machine Interfacing Demonstration

Embedded Solution to Improve How Users Interact with Devices

The Human to Machine Interfacing demo is a collection of neural network (NN) models accelerated on an FPGA. Together they enable user detection and report, for each detected person, body position and pose, face location, distance to the camera, and identification status.

The experience runs entirely on the FPGA; video and metadata are shipped to a host PC via USB and UART, respectively. The sensAI Edge AI Vision Engine tool can be used to visualize the Person Detection pipeline output overlaid on the camera image received over USB. Supported hardware is the CrossLink-NX-33 VVML board with a USB-to-TTL adapter for metadata transmission.
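As a rough illustration of how a host-side script might consume the UART metadata stream, the sketch below parses one detection record per line. The record layout (`id,x,y,w,h,distance_cm,pose`) is purely an assumption for illustration; the demo's actual metadata format is not specified here, and in practice the lines would arrive from a serial port (e.g. via a library such as pySerial) rather than from a string.

```python
# Hypothetical host-side parser for the UART metadata stream.
# ASSUMPTION: each detected person arrives as one comma-separated line of
# the form "id,x,y,w,h,distance_cm,pose". The real demo's format may differ.
from dataclasses import dataclass


@dataclass
class Detection:
    person_id: int     # identification status / tracked person ID
    x: int             # bounding-box top-left x (pixels)
    y: int             # bounding-box top-left y (pixels)
    w: int             # bounding-box width (pixels)
    h: int             # bounding-box height (pixels)
    distance_cm: float # estimated distance to the camera
    pose: str          # pose label for the detected body


def parse_metadata_line(line: str) -> Detection:
    """Parse one assumed-format metadata record into a Detection."""
    f = line.strip().split(",")
    return Detection(
        person_id=int(f[0]),
        x=int(f[1]),
        y=int(f[2]),
        w=int(f[3]),
        h=int(f[4]),
        distance_cm=float(f[5]),
        pose=f[6],
    )


# Example record for a single detected person
det = parse_metadata_line("3,40,52,64,80,120.5,standing")
print(det.person_id, det.distance_cm, det.pose)
```

A real receiver would read lines from the USB-to-TTL serial device in a loop and hand each `Detection` to the visualization layer; the parsing step itself stays the same.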

Features

  • Based on the MobileNet v1 network
  • Accelerated, low-power object detection demo
  • Processing at up to 30 FPS at 160x160 resolution
  • Total application power consumption of the CrossLink-NX FPGA is between 10 mW and 250 mW

Block Diagram

Documentation
