Lattice Blog


Bringing VR Experiences to Life with Positional Tracking Technology

Posted 03/20/2018 by Ying Chen


"Ryu faced his opponent and waited – waiting for the creature to make the first move... In a blink of an eye, his opponent charged forward with a roar like a bull. Ryu jerked back, aside. The blow meant for his torso whizzed by in a blur of knuckles. From the corner of his eye, he saw his opponent’s other arm begin an upward trajectory. Ryu ducked to the side and seized the perfect opening for a jumping uppercut..."

Virtual reality (VR) is more than just captivating displays with fancy 3D graphics. Fluid feedback of the user's gestures, posture, movements and position is pivotal for a fully immersive experience. If the user's actions cannot be tracked quickly and accurately, the fighting scene above would be impossible to realize. Worse yet, poor tracking can result in disconnected content and motion sickness. Hence, motion and positional tracking must be both accurate and low latency.

A complete tracking system employs both motion tracking and positional tracking. Motion tracking is typically done with an inertial measurement unit (IMU), which uses a combination of accelerometers and gyroscopes, and sometimes magnetometers as well. Positional tracking is often classified by how the setup is deployed. There are effectively two types of positional tracking used by mainstream VR platforms, outside-in and inside-out, each with its own pros and cons. Outside-in tracking typically requires less computing power, but additional setup is needed to define the area to be tracked. Inside-out tracking has a simpler setup, but it requires more complex vision processing and greater computing performance.
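To illustrate how an IMU fuses its sensors, here is a minimal sketch of a complementary filter: it blends the integrated gyroscope rate (smooth, but drifting over time) with the accelerometer's gravity vector (noisy, but absolute) to estimate a pitch angle. The function name, the choice of axes and the 0.98 blend factor are illustrative assumptions, not details of any particular headset:

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse one gyroscope and accelerometer sample into a pitch estimate.

    pitch:     previous pitch estimate (rad)
    gyro_rate: angular velocity around the pitch axis (rad/s)
    accel_x/z: accelerometer readings (any consistent unit)
    alpha:     trust placed in the gyro; (1 - alpha) slowly pulls the
               estimate toward the accelerometer to cancel gyro drift
    """
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular velocity
    accel_pitch = math.atan2(accel_x, accel_z)   # absolute pitch from gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Running this at each IMU sample keeps orientation responsive at high rates while the accelerometer term prevents the estimate from drifting away when the user is still.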

Outside-in Tracking

VR platforms such as the Oculus Rift, HTC Vive and PSVR are based on outside-in tracking, where external beacons or trackers are needed. The HTC Vive uses SteamVR Lighthouses, which act as beacons communicating with the infrared receivers on the head-mounted display (HMD) and accessories such as the handheld controllers; the Oculus Rift and Sony PSVR use external camera sensors to track markers on the HMD and accessories. These solutions require planned placement of the trackers/beacons to define the use area.

The advantage of outside-in tracking is that the beacons or markers provide explicit signaling/patterns, which simplifies the tracking computation. There is less data to process, which lowers the computing power required, and the tracking accuracy tends to be higher. On the flip side, the user must remain in the trackers'/beacons' line of sight to avoid occlusion. This means the trackers/beacons need to be properly placed and the use area needs to be clear of tall furniture or plants, which can be a challenge for users with limited space.
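A rough illustration of why beacon-based tracking is computationally light: a Lighthouse-style base station emits a sync flash and then sweeps a laser plane across the room at a fixed rotor rate, so the time between the flash and the laser hitting a photodiode maps directly to an angle. The sketch below shows that timing-to-angle conversion and a hypothetical back-projection to 3D; the 60 rotations per second matches SteamVR's published sweep rate, while the known-distance assumption stands in for the second base station or multi-sensor geometry a real system would use:

```python
import math

ROTOR_HZ = 60.0  # SteamVR Lighthouse rotors sweep at 60 rotations/second

def sweep_time_to_angle(t_seconds):
    """Convert elapsed time since the sync pulse into the sweep angle (rad)."""
    return 2 * math.pi * ROTOR_HZ * t_seconds

def position_from_angles(azimuth, elevation, distance):
    """Hypothetical back-projection: recover a 3D point from the two sweep
    angles plus a known distance to the base station (in practice the
    distance comes from a second station or multiple photodiodes)."""
    x = distance * math.cos(elevation) * math.sin(azimuth)
    y = distance * math.sin(elevation)
    z = distance * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z)
```

Because each photodiode hit reduces to a single timestamp, an FPGA can capture and timestamp many receivers in parallel with very little logic, which is far cheaper than processing full camera frames.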

Inside-out Tracking

Inside-out tracking typically uses stereo vision or time-of-flight sensors to detect and map changes in relative position. Pure markerless inside-out tracking uses a simultaneous localization and mapping (SLAM) algorithm to construct or update a map of an unknown environment while simultaneously keeping track of the user's location within it. Some platforms use external beacons to simplify the computation. Microsoft's recently announced mixed reality headsets, the Intel RealSense-based Project Alloy VR headset, and Ximmerse's X-Cobra are all based on inside-out tracking.

The biggest advantage of inside-out tracking is the freedom to play in most rooms without any major setup. Such convenience comes at a price, however. Inside-out tracking is typically based on stereo vision and requires powerful machine vision processing. While complex processing may not be a problem for PCs, it is a big challenge for the mobile processors that power HMDs. A power-efficient accelerator can help bridge the performance gap.
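To give a sense of the underlying math, the heart of stereo vision is the pinhole relation depth = f * B / d: once matching pixels are found in the left and right images, their horizontal offset (the disparity) yields depth directly. Searching for those matches across every pixel of every frame is the expensive part that benefits from hardware acceleration; the minimal sketch below only shows the final conversion, with illustrative camera parameters:

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Classic pinhole stereo relation: depth = focal_length * baseline / disparity.

    disparity_px:    horizontal pixel offset of a feature between the
                     left and right images (must be positive)
    focal_length_px: camera focal length expressed in pixels
    baseline_m:      distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed values: a 700 px focal length, 6 cm baseline,
# and a 21 px disparity place the feature 2 meters from the cameras.
depth_m = disparity_to_depth(21.0, 700.0, 0.06)
```

Note the inverse relationship: distant objects produce tiny disparities, so accurate long-range tracking demands sub-pixel matching precision, which drives the computational cost even higher.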

Lattice’s low-latency, flexible FPGAs can be found in many of the platforms described above. In Valve’s SteamVR tracking, the low-power, low-cost iCE40 FPGA performs low-latency, synchronized data capture from the infrared receivers. The CrossLink video bridging FPGA is also used in some inside-out tracking platforms to interface with and aggregate multiple MIPI CSI-2 image sensors for a PC to perform SLAM processing. For mobile VR, Ximmerse’s X-Cobra uses the ECP5 FPGA not only for stereo camera interfacing and aggregation, but also as a low-latency stereo vision processing accelerator to supplement the mobile processor. Other VR/AR solutions from Lattice include WirelessHD for wireless VR, which cuts the last cord to the fully immersive virtual world; multi-camera aggregation for 360-degree cameras; and MIPI bridging for microdisplays.

As VR technologies continue to evolve, we can expect newer systems to offer better tracking and more immersive experiences. Lattice’s comprehensive smart connectivity portfolio will continue to enable the evolution of this technology with low-latency, concurrent data capture and high-efficiency video computation in various aspects of positional tracking.