Lattice Blog


Sensor Fusion at the Edge: Two Perspectives

Posted 05/06/2021 by Mark Hoopes and JP Singh

Sensor fusion is a popular application for Lattice FPGAs, and one that comes up often in discussions within the marketing team here at Lattice. Because of their low power consumption and small size, Lattice FPGAs are frequently used for sensor fusion in Edge applications. But a recent conversation between the two of us showed that what defines an “Edge application” can differ from one end market to another, and we thought a blog post examining those differences might be informative.

An industrial engineer working in a factory may understand “the Edge” to refer to the boundary between the physical world, in the form of a machine on a production line, and the non-physical world, in the form of the factory’s intranet and/or the internet. In this case, what the engineer considers to be an Edge device may comprise sensors, actuators, and a control system used to monitor the state of a machine and perform any corrective actions as necessary. Oftentimes, such a device may feature some sort of cognitive (reasoning, thinking) capability in the form of artificial intelligence (AI).

Consider the case of machine maintenance, for example, for which there are various strategies that may be employed, including reactive, pre-emptive, and predictive. In the case of reactive maintenance, the idea is to run the machine until it fails and then fix it. The attraction of this strategy is that you can forget about the machine most of the time. The problem is that when the machine does fail, it may disrupt the entire production line, allowing a potentially minor issue to mushroom into a major problem. In the case of pre-emptive maintenance, the machine is serviced, and selected parts are replaced, on a calendar schedule or after a set number of running hours. The advantage of this strategy is that the machine will typically run for a long, long time without problems. The downside is the cost of replacing parts before the end of their useful life and the resources required to perform potentially unnecessary maintenance tasks. When it comes to predictive maintenance, the idea is for the Edge device to use its AI to monitor the machine’s health, looking for anomalies or trends, and then guide the maintenance team to address potential problems before they evolve into real issues.
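To make the predictive idea concrete, here’s a minimal sketch of one common approach: flag a sensor reading as anomalous when it strays several standard deviations from a rolling window of recent readings. The function names, window size, threshold, and vibration values are illustrative assumptions, not any particular product’s implementation.

```python
from collections import deque
from math import sqrt

def make_anomaly_detector(window=50, threshold=3.0):
    """Flag readings more than `threshold` standard deviations
    away from the mean of the last `window` samples."""
    history = deque(maxlen=window)

    def check(reading):
        anomalous = False
        if len(history) == window:
            mean = sum(history) / window
            var = sum((x - mean) ** 2 for x in history) / window
            std = sqrt(var)
            if std > 0 and abs(reading - mean) > threshold * std:
                anomalous = True
        history.append(reading)
        return anomalous

    return check

# Hypothetical vibration amplitudes: a steady pattern, then a sudden spike.
detector = make_anomaly_detector(window=20, threshold=3.0)
readings = [1.0 + 0.01 * (i % 5) for i in range(40)] + [5.0]
flags = [detector(r) for r in readings]
print(flags[-1])  # the spike is flagged; the steady pattern is not
```

In a real system the flagged events would be trended over time and surfaced to the maintenance team, rather than acted on individually.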

By comparison, an automotive designer may well regard an entire vehicle as being on the Edge. In this case, the vehicle may boast various advanced driver-assistance systems (ADAS), along with autonomous capabilities and functions, that rely on the gathering of data from myriad sensors, including image (camera), lidar, radar, ultrasonic, and far infrared (FIR) devices. Meanwhile, the task of the various automotive AI systems is to help the driver to drive without hitting anyone or anything, and to protect the vehicle and its human cargo from accidents and mishaps.

A typical complement of automotive sensors used for ADAS and autonomous applications.

In the same way that the term “Edge” can mean different things to different people, so too can the term “sensor fusion.” In its most generic sense, sensor fusion is the process of combining sensory data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually.

One form of sensor fusion involves combining the data from sensors like accelerometers, gyroscopes, and magnetometers to benefit from their strengths and to correct for their weaknesses. In the case of these types of sensors, we might talk about three axes: the X-axis, the Y-axis, and the Z-axis.

The X, Y, and Z axes.

There are two types of motion possible with regard to each of these axes: linear and angular (rotational). In the case of linear motion, movement is possible from side to side along the X-axis, up and down along the Y-axis, and forward and backward along the Z-axis. These may be regarded as three degrees of freedom (3DOF). By comparison, in the case of angular motion, it’s possible to rotate around one or more of the X, Y, and Z axes, thereby providing another 3DOF. Based on this, we might say that a rigid body can have a maximum of 6DOF, because there are only six independent ways it can move in three-dimensional (3D) space.

In addition to motion, we may also be interested in orientation (that is, the physical position or direction of the object in question relative to something else). Motion can be determined by measuring the differences between the values of all possible DOFs at two times (let’s call them tₙ and tₙ₊₁) and then repeating this process over and over again; orientation can be determined by knowing the values of all possible DOFs at some time tₙ.
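As a tiny worked example of that idea, the sketch below takes two hypothetical 6DOF samples at times tₙ and tₙ₊₁ and computes the per-DOF rate of change by finite difference. The state layout, units, and sample values are assumptions for illustration.

```python
def rates(state_n, state_n1, dt):
    """Finite difference (value at t_n+1 minus value at t_n) / dt
    for each degree of freedom."""
    return [(b - a) / dt for a, b in zip(state_n, state_n1)]

# 6DOF state: (x, y, z, roll, pitch, yaw) -- meters and degrees (assumed)
s0 = (0.0, 0.0, 0.0, 0.0, 0.0, 90.0)
s1 = (0.5, 0.0, 0.1, 0.0, 2.0, 91.0)

# Over dt = 0.1 s: x moves at 5 m/s, pitch changes at 20 deg/s,
# yaw at 10 deg/s.
print(rates(s0, s1, dt=0.1))
```

Repeating this at every sample interval yields the linear and angular velocities the rest of the fusion pipeline works with.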

A 3-axis accelerometer measures linear acceleration along the X, Y, and Z axes. By comparison, a 3-axis gyroscope measures the rate of rotation around the X, Y, and Z axes. Also, a 3-axis magnetometer can sense the direction of the strongest magnetic field relative to the X, Y, and Z axes. Magnetometers are usually used to detect the Earth’s magnetic field, but they can also be used to measure human-made magnetic fields if required.
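As a hedged illustration of what these readings yield individually, the sketch below derives pitch and roll from a stationary accelerometer’s gravity vector, and a compass heading from a level magnetometer. The axis conventions, units, and sample values are assumptions, and real designs must tilt-compensate the heading.

```python
from math import atan2, sqrt, degrees

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll (degrees) from the gravity vector reported
    by a 3-axis accelerometer at rest. Axis convention (an assumption):
    Z points up when the device lies flat."""
    pitch = degrees(atan2(-ax, sqrt(ay * ay + az * az)))
    roll = degrees(atan2(ay, az))
    return pitch, roll

def heading_from_mag(mx, my):
    """Estimate compass heading (degrees, 0 = magnetic north) from the
    horizontal components of a 3-axis magnetometer reading. Assumes the
    device is level; tilt compensation would use pitch and roll."""
    return degrees(atan2(my, mx)) % 360.0

# Device lying flat: gravity entirely on Z, field pointing along X (north).
print(pitch_roll_from_accel(0.0, 0.0, 9.81))  # -> (0.0, 0.0)
print(heading_from_mag(25.0, 0.0))            # -> 0.0
```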

Each type of sensor has its own strengths and weaknesses. For example, accelerometers are affected by vibration and unaffected by magnetic fields, while magnetometers are unaffected by vibration but may be confused by stray electromagnetic fields. The data from accelerometers can be used to derive rotational information, but gyroscopes provide much more accurate rotational results. On the other hand, gyroscopes are also subject to “drift,” which isn’t an issue with accelerometers and magnetometers.

So, conceptually, the lowest level of sensor fusion involves monitoring the outputs from all three types of sensors and using the data from each pair to correct for errors in the third member of the troika.
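One classic, lightweight way to implement this kind of mutual correction for a single axis is a complementary filter, sketched below. The filter coefficient, drift rate, and update interval are illustrative assumptions.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a single-axis complementary filter.

    The gyroscope is accurate over short intervals but drifts; the
    accelerometer is drift-free but noisy and vibration-sensitive.
    Blend them: trust the integrated gyro rate short-term (alpha) and
    pull toward the accelerometer-derived angle long-term (1 - alpha)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# A stationary device: the gyro reports a small spurious drift rate of
# 0.5 deg/s, while the accelerometer keeps reporting the true angle of 0.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.5, accel_angle=0.0, dt=0.01)

# Pure gyro integration would have accumulated 5 degrees of error over
# these 10 seconds; the fused estimate stays bounded near 0.245 degrees.
print(round(angle, 3))
```

The same blending idea extends to adding the magnetometer for yaw, where the accelerometer alone cannot help.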

The next level of sensor fusion involves combining the data from multiple sensors to provide “situational awareness” that can be used to refine the system’s understanding as to what is occurring in the real world, thereby enabling it to make better decisions. Consider early fitness wearable devices, for example. Although they were reasonably accurate at measuring the number of steps taken on a solid floor, they tended to be confused by exercise machines like treadmills, and a short ride up an escalator might easily end up being counted as an extra 1,000 paces. By comparison, modern equivalents use the combination of sensor fusion and AI to filter out any extraneous noise, to determine if the wearer is walking, running, riding a bicycle, swimming, etc., and to count only legitimate exercise activities.

In the case of automobiles, another form of sensor fusion is to gather and time-align the data from multiple sensors -- camera, lidar, radar, etc. -- and present it to AI systems that can compare what the different sensors are reporting and demand extra caution if the conclusions derived from the individual sensors don’t agree.
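A minimal sketch of that idea, with hypothetical sensor names, timestamps, and range values: pick the camera and radar samples nearest in time, then flag a disagreement larger than some tolerance.

```python
from bisect import bisect_left

def nearest(samples, t):
    """Return the (timestamp, value) sample closest in time to t.
    `samples` must be sorted by timestamp."""
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    candidates = samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

def cross_check(camera, radar, t, tolerance):
    """Time-align camera and radar range estimates at time t and report
    whether they disagree enough to demand extra caution. Sensor names,
    units (meters), and tolerance are illustrative assumptions."""
    _, cam_range = nearest(camera, t)
    _, rad_range = nearest(radar, t)
    return abs(cam_range - rad_range) > tolerance

# (timestamp in seconds, estimated range to lead vehicle in meters)
camera = [(0.00, 30.1), (0.05, 29.8), (0.10, 29.5)]
radar  = [(0.01, 30.0), (0.06, 29.9), (0.11, 22.0)]  # last return disagrees

print(cross_check(camera, radar, t=0.10, tolerance=2.0))  # -> True
```

A production system would fuse many more sensors and track objects over time, but the time-align-then-compare pattern is the same.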

Not surprisingly, most forms of sensor fusion require the manipulation of large quantities of real-time data with very low latency. Traditional von Neumann processor architectures are not ideally suited for this task. By comparison, field-programmable gate arrays (FPGAs) -- such as Lattice CrossLink™-NX devices -- are ideal for sensor fusion applications because their programmable fabric can be configured to perform sensor processing algorithms in a massively parallel fashion.

Furthermore, CrossLink-NX FPGAs include two hardened 4-lane MIPI D-PHY transceivers running at 10 Gbps per PHY, thereby allowing these devices to provide best-in-class performance for vision processing, sensor fusion, and AI inferencing applications.

The icing on the cake is that CrossLink-NX FPGAs are fully supported by the Lattice mVision™ and sensAI™ solution stacks. The Lattice mVision solution stack includes everything embedded vision system designers need to evaluate, develop, and deploy FPGA-based embedded vision applications, such as machine vision, robotics, ADAS, video surveillance, and drones. Meanwhile, the full-featured Lattice sensAI solution stack includes everything developers need to evaluate, develop, and deploy FPGA-based AI/ML applications.

Performing sensor fusion (in any of its incarnations) at the Edge (wherever you consider that to be) is advantageous in terms of getting results quickly while conserving communications bandwidth, as opposed to shipping humongous amounts of data into the cloud and then waiting for any results to be returned. Furthermore, the massively parallel processing capabilities offered by low-power, high-performance FPGAs -- such as CrossLink-NX devices -- mean that sensor fusion at the Edge is now both achievable and affordable.

