
Want Embedded Vision? Got MIPI?
Posted 04/17/2020 by PJ Chiang


It’s not so long ago that systems equipped with embedded vision capabilities were physically huge and ferociously expensive. As recently as 10 years ago, few people would have believed that things like home doorbells would be vision-enabled, yet now we think nothing of our doorbells alerting us with live video streams of shipping companies delivering packages to our front doors (and nefarious scoundrels removing these packages again before we return home).

Today, the number of applications employing embedded vision is growing exponentially, ranging from sophisticated industrial robots performing random pick-and-place to autonomous mobile robots (AMRs) navigating their way around an uncontrolled environment in which the landscape may be constantly changing. Industries rapidly adopting embedded vision include automotive, consumer smart home, medical, security/surveillance, and a wide range of industrial applications.

One somewhat unexpected player in the embedded vision world is the MIPI Alliance, with its camera (CSI-2) and display (DSI) protocols. When it was first introduced, MIPI was focused on mobile applications like smartphones. However, low-cost, high-bandwidth MIPI solutions are now appearing all over the place.

Of course, nothing is simple. In many cases, developers wish to invigorate existing systems by keeping their legacy processor (and its existing code) while upgrading the rest of the system with new, more efficient, lower-power, MIPI-enabled sensors and/or displays. (In this context, the term "processor" may refer to System-on-Chip (SoC), application-specific standard part (ASSP), and application processor (AP) devices.)

Problem 1: How to make a legacy processor work with MIPI-enabled sensors and/or displays.

Another common scenario is for developers creating new systems to opt for a MIPI-enabled processor, while wishing to continue using tried-and-tested legacy (non-MIPI) sensors and/or displays.

Problem 2: How to make legacy sensors and/or displays work with a MIPI-enabled processor.

Both of these are examples of bridging problems, where the term "bridging" refers to converting video signals from one interface standard to another. In both cases, there is a low-cost, high-performance solution in the form of Lattice Semiconductor's CrossLink FPGAs, which are optimized for high-speed video and sensor applications.

Solution: CrossLink FPGAs address legacy-to-MIPI bridging scenarios.

Augmenting traditional programmable fabric with hardened PHYs, CrossLink FPGAs provide the industry's fastest MIPI D-PHY bridging solution, supporting 4K UHD resolution at speeds up to 12 Gbps. Furthermore, CrossLink devices are available in amazingly small 2.46 x 2.46 mm WLCSP packages, as well as BGA packages with 0.4 mm, 0.5 mm, and 0.65 mm pitches.
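To put that 12 Gbps figure in perspective, a quick back-of-the-envelope calculation shows what uncompressed 4K UHD video demands (assuming 24 bits per pixel at 60 frames per second; these particular figures are illustrative assumptions, not specifications quoted in this post):

\[ 3840 \times 2160 \ \text{pixels} \times 60 \ \text{fps} \times 24 \ \text{bits/pixel} \approx 11.9 \ \text{Gbps} \]

In other words, a single uncompressed 4K UHD stream at 60 fps consumes essentially all of that bandwidth on its own.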

As impressive as all this may be, bridging applications are only the “tip of the iceberg” with regard to the capabilities of CrossLink FPGAs in embedded vision systems.

In the case of safety-critical systems, for example, it may be necessary to take the video stream from a sensor and duplicate it to feed multiple processors; the idea here is that, should one of the processors fail, a redundant backup is still running. A corresponding situation, known as display splitting, occurs when a video signal generated by the system's processor must be split to feed multiple displays. Yet another scenario is sensor aggregation, in which video streams from multiple sensors are aggregated into a single stream before being fed to the processor.
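To make these three patterns concrete, here is a purely illustrative software model. All function names are hypothetical, and real CrossLink designs implement this in FPGA fabric at wire speed, not in Python:

```python
# Toy model of the three video-routing patterns described above.
# Names are invented for illustration only.

from typing import List


def duplicate_stream(frames: List[bytes], n: int) -> List[List[bytes]]:
    """Sensor duplication (and, symmetrically, display splitting):
    replicate one video stream so it can feed n processors/displays."""
    return [list(frames) for _ in range(n)]


def aggregate_sensors(streams: List[List[bytes]]) -> List[bytes]:
    """Sensor aggregation: interleave frames from several sensors,
    round-robin, into a single stream for one processor."""
    merged: List[bytes] = []
    for frame_group in zip(*streams):
        merged.extend(frame_group)
    return merged


cam_a, cam_b = [b"A0", b"A1"], [b"B0", b"B1"]
print(duplicate_stream(cam_a, 2))         # two identical copies
print(aggregate_sensors([cam_a, cam_b]))  # [b'A0', b'B0', b'A1', b'B1']
```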

In all of these cases, CrossLink FPGAs can provide the solution. To make things easy for developers, CrossLink FPGAs are supported by a library of royalty-free IP modules focused on receiving, converting, and transmitting video data.
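As a taste of what the "converting" side of such modules deals with, consider how MIPI CSI-2 packs RAW10 video: four 10-bit pixels travel as five bytes, with the 8 MSBs of each pixel first, followed by one byte holding the four 2-bit remainders. The Python below is just a conceptual sketch of that unpacking step, not any actual Lattice IP (which does this in hardware):

```python
# Conceptual sketch of unpacking the MIPI CSI-2 RAW10 format:
# 4 pixels per 5 bytes (4 MSB bytes + 1 byte of packed 2-bit LSBs).

from typing import List


def unpack_csi2_raw10(payload: bytes) -> List[int]:
    """Unpack CSI-2 RAW10 payload bytes into 10-bit pixel values."""
    if len(payload) % 5:
        raise ValueError("RAW10 payload length must be a multiple of 5")
    pixels: List[int] = []
    for i in range(0, len(payload), 5):
        msbs, lsb_byte = payload[i:i + 4], payload[i + 4]
        for j, msb in enumerate(msbs):
            # Pixel j's low 2 bits sit at bit position 2*j of the 5th byte.
            pixels.append((msb << 2) | ((lsb_byte >> (2 * j)) & 0x3))
    return pixels


# Example: four pixels 0x3FF, 0x000, 0x155, 0x2AA
print(unpack_csi2_raw10(bytes([0xFF, 0x00, 0x55, 0xAA, 0b10010011])))
# -> [1023, 0, 341, 682]
```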

But wait, there’s more, because we also have a whitepaper that discusses trends in embedded vision, introduces MIPI in more detail, takes a deeper dive into the architecture of CrossLink FPGAs, explores various design scenarios in greater depth, and provides a high-level view of the CrossLink design process.

All that remains is for me to wish you a happy CrossLink design experience!
