Latency in Virtual Reality
According to John Carmack, virtual reality (VR) is one of the most demanding human-in-the-loop applications from a latency standpoint. The latency between a user's physical head movement and updated photons from a head-mounted display reaching their eyes is one of the most critical factors in providing a high-quality experience.
Human sensory systems can detect very small relative delays in parts of the visual or, especially, audio fields, but absolute delays below approximately 20 milliseconds are generally imperceptible. Interactive 3D systems today typically have latencies several times that figure, but alternate configurations of the same hardware components can allow that target to be reached.
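A back-of-the-envelope calculation (with illustrative numbers, not measurements from any specific system) shows why typical pipelines land several times above the roughly 20 ms target:

```python
# Illustrative latency budget: a classic pipelined renderer spends
# several whole frame intervals between sampling input and scanning
# the resulting image out to the display.
REFRESH_HZ = 60
frame_ms = 1000 / REFRESH_HZ     # ~16.7 ms per frame at 60 Hz
pipeline_frames = 3              # input/simulation, GPU rendering, scanout
total_ms = pipeline_frames * frame_ms
print(f"{total_ms:.0f} ms end-to-end")  # ~50 ms, well above the ~20 ms target
```

At 60 Hz, even a modest three-stage pipeline accumulates about 50 ms of motion-to-photon latency, which is why reclaiming even a single frame interval matters.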
How to Improve Latency by Preventing GPU Buffering
To create a VR system with low latency, we have to consider every component in the pipeline and how much delay each one contributes. Current LCD displays have fairly slow pixel switching times and low refresh rates. This means that even if latency on the host side is minimal, there will still be a noticeable delay between image updates. These display-level issues should largely diminish as faster display technologies become prevalent.
The following diagram illustrates the classic processing model used in VR applications:
Source: “Latency Mitigation Strategies” by John Carmack
- I = user input, S = simulation, R = rendering commands issued by the CPU, G = GPU rendering, V = video scanout, | = vsync
As you can see, with a buffered pipeline the GPU begins rendering only after every rendering command for the frame has been issued. This introduces unnecessary latency and simultaneously suggests a simple solution: if GPU buffering is prevented, the GPU can start rendering as soon as the first command is issued, reducing latency by roughly 16 ms (one frame interval at 60 Hz).
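The effect of that change can be sketched with a toy timing model (a simplification under assumed numbers, not a measurement of any real driver): assume CPU work (I, S, R) fills one vsync interval, GPU rendering fills one interval, and scanout begins at the next vsync boundary after the GPU finishes.

```python
import math

FRAME_MS = 1000.0 / 60.0  # ~16.7 ms per vsync interval at 60 Hz

def motion_to_photon(gpu_overlaps_cpu: bool) -> float:
    """Return ms from input sampling to the start of scanout."""
    cpu_done = FRAME_MS  # I, S, R finish at the first vsync
    if gpu_overlaps_cpu:
        # No buffering: the GPU starts on the first command and finishes
        # alongside the CPU, so scanout can begin at the very next vsync.
        gpu_done = cpu_done
    else:
        # Buffered: the GPU waits for the full command stream, then spends
        # a whole interval rendering before scanout can begin.
        gpu_done = cpu_done + FRAME_MS
    # Scanout starts at the first vsync boundary at or after GPU completion.
    return math.ceil(gpu_done / FRAME_MS) * FRAME_MS

buffered = motion_to_photon(gpu_overlaps_cpu=False)
unbuffered = motion_to_photon(gpu_overlaps_cpu=True)
print(f"buffered: {buffered:.1f} ms, unbuffered: {unbuffered:.1f} ms")
print(f"saved: {buffered - unbuffered:.1f} ms")  # one full frame interval
```

Under these assumptions, starting GPU work as commands arrive instead of after a full frame of buffering saves exactly one vsync interval, about 16.7 ms at 60 Hz.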