Positional tracking

From Virtual Reality and Augmented Reality Wiki
See also: Tracking

Introduction

Positional tracking is a technology that allows a device to estimate its position relative to the environment around it. It combines hardware and software to detect the device's absolute position. Positional tracking is an essential technology for virtual reality (VR), making it possible to track movement with six degrees of freedom (6DOF) [1] [2].

It has to be noted that head tracking is not the same as positional tracking. Head tracking registers only the rotation of the head (rotational tracking), with movements such as pitch, yaw, and roll, while positional tracking registers the exact position of the headset in space, recognizing forward/backward, up/down, and left/right movement [3].

Positional tracking VR technology brings various benefits to the VR experience. It can change the viewpoint of the user to reflect different actions like jumping, ducking, or leaning forward; allow for an exact representation of the user’s hands and other objects in the virtual environment; increase the connection between the physical and virtual world by, for example, using hand position to move virtual objects by touch; and detect gestures by analyzing position over time [2] [4].

Positional tracking also improves 3D perception of the virtual environment because of parallax (the way objects closer to the eyes move faster than objects farther away), which, together with stereoscopy, helps inform the brain's perception of distance [2] [4]. In addition, 6DOF tracking drastically reduces motion sickness during the VR experience that is caused by the disconnect between what is seen by the eyes and what is felt by the vestibular system of the inner ear [2] [3].

There are different methods of positional tracking. Choosing which one to apply depends on various factors such as the required tracking accuracy and refresh rate, the size of the tracking area, whether the tracking is indoor or outdoor, cost, power consumption, available computational power, whether the tracked object is rigid or flexible, and whether the objects are well known or can change [4].

Positional tracking VR technology is a necessity for VR to work properly, since an accurate representation of objects such as the head and hands in the virtual world contributes to immersion and a greater sense of presence [2] [3] [4] [5].

Methods of positional tracking

Figure 1: Markers on an HMD (Image: www.roadtovr.com)
Figure 2: Optical marker by Intersense (Image: www.roadtovr.com)

There are various methods of positional tracking. The description of the methods provided below is based on Boger (2014) [4].

Acoustic Tracking

The measurement of the time it takes for a known acoustic signal to travel between an emitter and a receiver is known as acoustic tracking. Generally, several transmitters are placed in the tracked area and several receivers are placed on the tracked objects. The distance between a receiver and a transmitter is calculated from the time the acoustic signal takes to reach the receiver; for this to work, the system must know when the acoustic signal was sent. The orientation of a rigid object can be determined if the object has multiple receivers placed at known positions: the differences between the signal's arrival times at those receivers provide data about the object's orientation relative to the transmitters.
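The time-of-flight and position calculation described above can be sketched in a few lines. This is an illustrative 2-D example under simplified assumptions (an ideal speed of sound, three transmitters, no measurement noise); real systems work in 3-D and fuse additional sensors. The function and variable names are hypothetical, not from any particular product.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C (assumed)

def tof_distance(t_emit, t_receive):
    """Distance implied by the time an acoustic pulse takes to arrive."""
    return SPEED_OF_SOUND * (t_receive - t_emit)

def trilaterate_2d(beacons, distances):
    """Solve for (x, y) from three transmitter positions and measured ranges.

    Subtracting the circle equations pairwise yields a 2x2 linear system,
    solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver actually at (1, 2); ranges derived from three fixed transmitters.
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
ranges = [math.dist(b, (1.0, 2.0)) for b in beacons]
print(trilaterate_2d(beacons, ranges))  # recovers approximately (1.0, 2.0)
```

In practice the ranges come from `tof_distance`, which is why the transmitter and receiver clocks must be synchronized, and why ambient noise that corrupts the arrival time directly corrupts the position estimate.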

One of the downsides of acoustic tracking is that it requires time-consuming calibration to function properly. The acoustic trackers are also susceptible to measurement error due to ambient disturbances such as noise and do not provide high update rates. Due to these disadvantages, acoustic tracking systems are commonly used with other sensors (e.g. inertial sensors) to provide better accuracy.

Intersense, an American technology company, has developed successful acoustic tracking systems.

Inertial Tracking

Inertial tracking is made possible by the use of accelerometers and gyroscopes. Accelerometers measure linear acceleration, which is integrated to obtain velocity and, in turn, the position of the object relative to an initial point [4]. A gyroscope measures angular velocity. It is a solid-state component based on microelectromechanical systems (MEMS) technology and operates on the same principles as a mechanical gyro. From the angular velocity data provided by the gyroscope, angular position relative to the initial point is calculated.

This technology is inexpensive and can provide high update rates as well as low latency. On the other hand, the calculations (i.e. integration and double integration) that turn the accelerometer and gyroscope readings into a position accumulate error over time, which can result in significant drift in the position information, decreasing this method's accuracy.
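A minimal 1-D sketch of this dead-reckoning process shows how even a small sensor bias (an assumed value here, for illustration) turns into position drift after double integration:

```python
def dead_reckon(accel_samples, dt):
    """Double-integrate acceleration to position (1-D, starting at rest)."""
    velocity = position = 0.0
    for accel in accel_samples:
        velocity += accel * dt    # integrate acceleration -> velocity
        position += velocity * dt  # integrate velocity -> position
    return position

dt = 0.001      # 1 kHz IMU update rate (assumed)
n = 10_000      # 10 seconds of samples

# A perfectly stationary object with an ideal sensor stays at 0 m.
ideal = dead_reckon([0.0] * n, dt)

# The same stationary object seen by a sensor with a 0.01 m/s^2 bias
# appears to drift: roughly 0.5 * bias * t^2, about half a meter here.
drifted = dead_reckon([0.01] * n, dt)

print(ideal, drifted)
```

The quadratic growth of the error is why inertial tracking alone is rarely used for position over more than short intervals, and why it is usually paired with an absolute reference (see sensor fusion below).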

Magnetic Tracking

This method measures the magnitude of the magnetic field in different directions. Normally, the system has a base station that generates a magnetic field, with the strength of the field diminishing as the distance between the measurement point and the base station increases. A magnetic field also allows for the determination of orientation: if the measured object is rotated, the distribution of the magnetic field along the various axes changes.
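The distance-from-field-strength idea can be sketched under an idealized assumption: for a dipole-like source, field magnitude falls off roughly with the cube of distance, so distance can be recovered from a single calibration measurement. Real magnetic trackers measure the full 3-axis field to get orientation as well; the constants below are hypothetical.

```python
def distance_from_field(b_measured, k):
    """Invert an assumed 1/r^3 dipole falloff: B = k / r^3  =>  r = (k / B)^(1/3)."""
    return (k / b_measured) ** (1.0 / 3.0)

# Hypothetical calibration: 50 uT measured at 1 m from the base station.
k = 50.0 * 1.0**3

print(distance_from_field(50.0, k))   # at the calibration point: 1 m
print(distance_from_field(6.25, k))   # field dropped by 8x -> distance doubled
```

The cubic falloff also explains why these systems are so sensitive to nearby conductive and ferromagnetic materials: a small distortion of the field translates into a noticeable position error.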

In a controlled environment, magnetic tracking's accuracy is good. However, it can be affected by interference from conductive materials near the emitter or sensors, from magnetic fields generated by other devices, and from ferromagnetic materials in the tracking area. The Razer Hydra motion controller is an example of a product implementing this type of positional tracking.

Most head-mounted displays (HMDs) and smartphones contain IMUs or magnetometers that detect the Earth's magnetic field.

Optical Tracking

For optical tracking, there are various methods available. The commonality between them all is the use of cameras to gather positional information.

Tracking with markers

This optical tracking method uses a specific pattern of markers placed on an object (Figure 1). One or more cameras seek the markers, using algorithms to extract the position of the object from the visible markers. From the difference between what the camera detects and the known marker pattern, an algorithm calculates the position and orientation of the tracked object. The pattern of markers placed on the tracked object is not random: the number, location, and arrangement of the markers are carefully chosen to provide the system with as much information as possible, so the algorithms are not left with missing data.
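The "known pattern versus observed pattern" calculation can be illustrated with a toy 2-D example: two markers at known positions in the object's own frame are matched against where the camera saw them, yielding the object's translation and heading. Real systems solve the full 3-D pose (e.g. via perspective-n-point algorithms) from many markers; the two-marker pattern here is a hypothetical simplification.

```python
import math

def pose_from_markers(model, observed):
    """Recover 2-D position and heading of a rigid object from two markers.

    model:    the markers' known coordinates in the object's own frame
    observed: where the camera saw those same markers
    The rotation is the angle between the two marker baselines; the
    translation is whatever maps the first model marker onto its observation.
    """
    (mx1, my1), (mx2, my2) = model
    (ox1, oy1), (ox2, oy2) = observed
    theta = math.atan2(oy2 - oy1, ox2 - ox1) - math.atan2(my2 - my1, mx2 - mx1)
    c, s = math.cos(theta), math.sin(theta)
    tx = ox1 - (c * mx1 - s * my1)
    ty = oy1 - (s * mx1 + c * my1)
    return tx, ty, theta

# Markers 10 cm apart on the object; the camera sees the object shifted
# to (1, 2) and rotated 90 degrees counter-clockwise.
model = [(0.0, 0.0), (0.1, 0.0)]
observed = [(1.0, 2.0), (1.0, 2.1)]
print(pose_from_markers(model, observed))  # approximately (1.0, 2.0, pi/2)
```

With only two markers the pose is ambiguous whenever one is hidden, which is exactly why real marker patterns use many markers in a deliberately asymmetric arrangement.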

There are two types of markers: passive and active. Passive markers reflect infrared light (IR) towards the light source. In this case, the camera provides the IR signal that is reflected from the markers for detection. Active markers are IR lights that flash periodically and are detected by the cameras. Choosing between the two types of markers depends on several variables like distance, type of surface, required viewing direction, and others.

Tracking with visible markers

Visible markers (Figure 2) placed in a predetermined arrangement are also used in optical tracking. The camera detects the markers and their positions, from which the position and orientation of the object are determined. For example, visible markers can be placed in a specific pattern around the tracking area, and an HMD with cameras then uses them to calculate its own position. The shape and size of these markers can vary; what matters is that the cameras can easily identify them.

Markerless tracking

Objects can be tracked without markers if their geometry is known. With markerless tracking, the system's camera compares the captured image against a known 3D model, looking for features such as edges or color transitions.

Depth map tracking

A depth camera uses various technologies to create a real-time map of the distances between the camera and the objects in the tracking area. The tracking is performed by extracting the object to be tracked (e.g. a hand) from the general depth map and analyzing it. An example of a depth map camera is Microsoft's Kinect.

Sensor Fusion

Sensor fusion is a method of combining more than one tracking technique to improve the detection of the position and orientation of the tracked object. By using a combination of techniques, one method's disadvantage can be compensated by another. An example is the combination of inertial tracking and optical tracking: the former develops drift, and the latter is susceptible to markers being hidden (occlusion). By combining both, the position can be estimated from the inertial trackers whenever markers are occluded, and even when the optical markers are fully visible, the inertial sensors provide updates at a higher rate, improving the overall positional tracking.
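A classic lightweight form of this fusion is a complementary filter: the inertial path predicts motion at a high rate, and each optical fix pulls the estimate back toward the absolute measurement, cancelling drift. The sketch below is a 1-D toy (real systems typically use Kalman-style filters in 3-D); rates, bias, and the blend factor are assumed values for illustration.

```python
def fuse_step(est, imu_velocity, dt, optical_pos=None, alpha=0.9):
    """One complementary-filter update for a 1-D position estimate.

    The inertial prediction runs every step; when an optical fix is
    available (markers not occluded), it is blended in with weight 1 - alpha.
    """
    predicted = est + imu_velocity * dt   # fast inertial prediction
    if optical_pos is None:               # markers occluded: coast on the IMU
        return predicted
    return alpha * predicted + (1 - alpha) * optical_pos

# Object standing still at x = 0; the IMU reports a spurious 0.05 m/s
# velocity (simulated drift). An optical fix arrives every 10th sample.
est = 0.0
for step in range(1000):
    optical = 0.0 if step % 10 == 0 else None
    est = fuse_step(est, imu_velocity=0.05, dt=0.001, optical_pos=optical)

print(est)  # stays well below the 0.05 m drift of the uncorrected inertial path
```

Raising `alpha` trusts the smooth, low-latency inertial path more between fixes; lowering it snaps harder to the optical measurements at the cost of passing their noise through.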

Oculus Rift and HTC Vive’s positional tracking

The Oculus Rift positional tracking is different from the one the HTC Vive uses. While the Oculus Rift uses Constellation, an IR-LED array that is tracked by a camera, the HTC Vive uses Valve’s Lighthouse technology, which is a laser-based system [3].

In the Oculus Rift, movement is limited to the camera's field of view; when not enough LEDs are in sight of the camera, the software falls back on data from the headset's IMU sensors. With Valve's positional tracking system, the tracking area is flooded with non-visible light that the HTC Vive detects using photosensors [3] [6].

Positional tracking and smartphones

Positional tracking in mobile VR still struggles to achieve a good level of accuracy, mainly due to the power needed to handle a positional tracking VR system and the fact that using QR codes and cameras for tracking would contradict the essence of a simple, intuitive, and mobile VR experience [3]. Currently, mobile devices are limited by their form factor and can only track the movements of a user's head. Nevertheless, companies are still investing in the development of an accurate positional tracking system for smartphones. Having this system available to anyone with a VR-capable phone would facilitate the adoption of VR by the general public, possibly unlocking the potential of the VR market [3] [7].

Types of positional tracking

Inside-out tracking - the tracking camera is placed on the device (HMD) being tracked.

Outside-in tracking - the tracking camera(s) is placed in the external environment, with the tracked device (HMD) within its view.

Markerless tracking - a tracking system that does not use fiducial markers.

Markerless inside-out tracking - combines markerless tracking with inside-out tracking.

Markerless outside-in tracking - combines markerless tracking with outside-in tracking.

Comparison of tracking systems

See also: Comparison of tracking systems

There are several consumer-level tracking systems currently available. Originally, these were used for interaction with regular non-VR video games, but more recent tracking systems have been used for VR systems.

| Brand & Model        | Tracking system | Inside-out | Outside-in | Marker-based | Marker light frequency | IMU | Spatial resolution (mm) | Latency (ms) |
|----------------------|-----------------|------------|------------|--------------|------------------------|-----|-------------------------|--------------|
| Facebook/Oculus Rift | Constellation   | No         | Yes        | Yes          | Infrared               | Yes | ?                       | ?            |
| HTC Vive/SteamVR     | Lighthouse      | Yes        | No         | Yes          | Infrared               | Yes | 0.3                     | 15           |
| Microsoft HoloLens   | ?               | Yes        | No         | No           | Infrared               | Yes | ?                       | ?            |
| Nintendo Wii Remote  | ?               | Yes        | No         | Yes          | Infrared               | Yes | ?                       | ?            |
| Sony PSVR            | ?               | No         | Yes        | Yes          | Red/Green/Blue         | Yes | ?                       | 18           |
| Google WorldSense    | ?               | Yes        | No         | No           | ?                      | Yes | ?                       | ?            |

References

  1. StereoLabs. Positional Tracking. Retrieved from https://www.stereolabs.com/documentation/overview/positional-tracking/introduction.html
  2. Lang, B. (2013). An introduction to positional tracking and degrees of freedom (DOF). Retrieved from http://www.roadtovr.com/introduction-positional-tracking-degrees-freedom-dof/
  3. Rohr, F. (2015). Positional tracking in VR: what it is and how it works. Retrieved from http://data-reality.com/positional-tracking-in-vr-what-it-is-and-how-it-works
  4. Boger, Y. (2014). Overview of positional tracking technologies for virtual reality. Retrieved from http://www.roadtovr.com/overview-of-positional-tracking-technologies-virtual-reality/
  5. RealVision. The dilemma of positional tracking in cinematic vr films. Retrieved from http://realvision.ae/blog/2016/06/the-dilemma-of-positional-tracking-in-cinematic-vr-films/
  6. Buckley, S. (2015). This is how Valve’s amazing lighthouse tracking technology works. Retrieved from http://gizmodo.com/this-is-how-valve-s-amazing-lighthouse-tracking-technol-1705356768
  7. Grubb, J. (2016). Why positional tracking for mobile virtual reality is so damn hard. Retrieved from https://venturebeat.com/2016/02/24/why-positional-tracking-for-mobile-virtual-reality-is-so-damn-hard