GPU buffering

GPU buffering plays an important role in the motion-to-photon pipeline of current VR systems. Optimizing how frames are buffered can significantly reduce latency and help minimize common problems such as simulator sickness and disorientation.

Latency in Virtual Reality

According to John Carmack, virtual reality (VR) is one of the most demanding human-in-the-loop applications from a latency standpoint. The latency between the physical movement of a user’s head and updated photons from a head-mounted display reaching their eyes is one of the most critical factors in providing a high-quality experience.

Human sensory systems can detect very small relative delays in parts of the visual or, especially, audio fields, but when absolute delays are below approximately 20 milliseconds they are generally imperceptible. Interactive 3D systems today typically have latencies that are several times that figure, but alternate configurations of the same hardware components can allow that target to be reached.[1]
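
To see how quickly such delays add up, consider a rough motion-to-photon budget for a conventional 60 Hz pipeline. The per-stage figures below are illustrative assumptions, not measurements: input sampling takes a few milliseconds, while simulation, rendering, and scanout each occupy roughly one full frame.

  \[
  t_{\text{m2p}} = t_{\text{input}} + t_{\text{sim}} + t_{\text{render}} + t_{\text{scanout}}
                 \approx 4 + 16.7 + 16.7 + 16.7 \approx 54\ \text{ms}
  \]

That is roughly three times the 20 ms perceptual target.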

How to Improve Latency by Preventing GPU Buffering

To create a VR system with low latency, every component of the pipeline and its individual contribution to the total delay has to be considered. Current LCD displays have fairly slow pixel switching times and low refresh rates, which means that even if latency on the host side is minimal, there will still be a noticeable delay between image updates. These display-side issues should diminish as panels with faster switching and higher refresh rates become prevalent.
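To put a number on the display's contribution alone: assuming a 60 Hz panel, a new frame can be presented only once per refresh period,

  \[
  T = \frac{1}{60\ \text{Hz}} \approx 16.7\ \text{ms},
  \]

so even with zero host-side latency, a pixel waits on average half a frame, about 8.3 ms, before its next update.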

The following picture illustrates the classic processing model used in VR applications:

[Image: the classic render pipeline (Render pipline basic1.png). Source: “Latency Mitigation Strategies” by John Carmack]

  • I = user input, S = simulation, R = rendering commands issued on the CPU, G = GPU rendering of graphics, V = video scanout, | = vsync

As the diagram shows, the GPU begins rendering (G) only after every rendering command for the frame has been issued and buffered. This introduces unnecessary latency and at the same time suggests a simple remedy: if GPU buffering is prevented, the GPU can start rendering as soon as the first command is issued, saving up to one frame, roughly 16 ms at 60 Hz, of latency.

[Image: the render pipeline without GPU buffering (Render pipeline no gpu buffering1.png). Source: http://www.chioka.in/category/virtual-reality/]
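
A minimal sketch of how an application can enforce this in practice, assuming an OpenGL renderer. This illustrates the general technique rather than reproducing code from Carmack’s article; the helper functions sample_input(), run_simulation(), issue_draw_calls(), and swap_buffers() are hypothetical placeholders for the application’s own code.

  /* Render loop that keeps the driver from buffering a full frame of
     GPU work.  Assumes an OpenGL 3.2+ context whose entry points are
     provided by a loader such as GLAD.                                */
  #include <glad/glad.h>

  /* Hypothetical application hooks. */
  extern void sample_input(void);
  extern void run_simulation(void);
  extern void issue_draw_calls(void);
  extern void swap_buffers(void);

  void render_one_frame(void)
  {
      sample_input();        /* I: read the freshest head pose          */
      run_simulation();      /* S: advance the world state              */

      issue_draw_calls();    /* R: submit rendering commands            */
      glFlush();             /* start GPU work (G) immediately instead
                                of letting the driver queue the frame  */

      /* Block until the GPU has actually finished this frame, so the
         driver cannot run one or more frames ahead of the CPU.        */
      GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
      glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                       1000000000ull /* 1 s safety timeout, in ns */);
      glDeleteSync(fence);

      swap_buffers();        /* V: present at the next vsync (|)        */
  }

Many drivers expose the same idea as a “maximum pre-rendered frames” setting; the fence above simply enforces a queue depth of one from inside the application, trading some CPU/GPU overlap for lower latency.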

References

  1. John Carmack, “Latency Mitigation Strategies”, AltDevBlog, 22 February 2013. http://altdevblog.com/2013/02/22/latency-mitigation-strategies/