Ray tracing

From Virtual Reality and Augmented Reality Wiki

Introduction

Ray tracing is a technique for rendering three-dimensional images with complex light interactions by tracing a path of light through pixels on an image plane. This technique can create graphics of mirrors, transparent surfaces, and shadows with very good results. [1] [2]

To achieve a sense of realism, it is necessary to use a renderer that can simulate the light interactions occurring in the scene. These interactions include reflection, refraction, absorption, etc., and simulating them requires full knowledge of the scene when processing each individual pixel. A common rendering technique - the real-time rasterised renderer - does not support such computations. [3]

With ray tracing, rays are sent out into the scene to explore the surroundings when rendering a pixel. If a ray toward the light source is blocked by some piece of geometry, the point is in shadow; if a ray bounced off a surface is used to find the color of another object, the result is a reflection. This allows for graphical effects that are not possible with traditional renderers. [1]

One of the downsides of ray tracing is that it requires a high amount of processing capability since firing rays into a scene to find their intersection with the scene geometry is complex and computationally intensive. [3]

The ray tracing technique

Figure 1. Primary ray and shadow ray. (Image: scratchapixel.com)
Figure 2. Shadow ray intersects another object. (Image: scratchapixel.com)

Some things have to be taken into account when trying to simulate a light-object interaction in a computer-generated image: without light, a person cannot see anything; without objects in the environment, light cannot be seen; and of all the rays reflected by an object, only a few will reach the surface of the eye. [4]

In case of computer graphics, the eyes are replaced with an image plane composed of pixels. The photons emitted by the light source will hit one of the pixels on the image plane, increasing its brightness value. Repeating this process several times until all pixels are adjusted leads to the creation of a computer-generated image. This technique is called forward ray tracing because the path of the photon from the light source to the observer is followed. [4]

This technique has a problem: not all of the reflected photons intersect the surface of the eye. In fact, since they are reflected in every possible direction, each of them has only a small probability of actually hitting the eye. This means that it would be necessary to simulate a vast number of photons coming from the light source and interacting with the objects in a scene, which is not a practical solution. [4]

The main difficulty is not in creating a large number of photons from the light source, but finding all of their intersections within the scene, which would be computationally costly. While it is technically possible to simulate the way light travels in nature, it is not the most efficient or practical technique. According to Turner Whitted, who wrote an influential paper called ‘An Improved Illumination Model for Shaded Display,’ “In an obvious approach to ray tracing, light rays emanating from a source are traced through their paths until they strike the viewer. Since only a few will reach the viewer, this approach is wasteful.” [4]

An alternative to forward ray tracing is backward tracing. In this case, instead of tracing rays from the light source to the receptor, the rays are traced backwards from the receptor to the objects. This is a convenient solution to the problem presented by the forward ray tracing technique. Since simulations cannot be as fast and perfect as nature, a compromise is made and a ray is traced from the receptor into the scene (called a primary ray, visibility ray, or camera ray). If this ray hits an object, another ray can then be sent from the hit point to the light source in order to find out how much light it receives. This second ray is called a light or shadow ray. When this ray is obstructed by another object, it means that the original hit point is in a shadow. [4]
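Both the primary ray and the shadow ray come down to the same geometric operation: a ray-object intersection test. As a minimal sketch (assuming spheres as the scene geometry, a choice made here for illustration rather than anything specified in the article), the test solves a quadratic in the ray parameter t:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit in front of
    the origin, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    `direction` is assumed to be normalized, so the quadratic's a == 1.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0      # nearer of the two roots
    return t if t > 0.0 else None         # only count hits in front of the origin
```

A ray from the origin along +z toward a sphere centered at (0, 0, 5) with radius 1 hits at t = 4; running the same test from a hit point toward the light answers the shadow question.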

It should be noted that some authors use the terms ‘forward tracing’ and ‘backward tracing’ with inverted meanings: in this case forward tracing would mean to trace the rays from the receptor to the objects and backward tracing to trace them from the light source to the receptor. [4]

In general, a ray tracing algorithm takes an image made of pixels and for each pixel, it shoots a primary ray into the scene. After the primary ray’s direction is set, the objects of the scene are checked to see if the ray intersects with any of them. It could be the case that the primary ray will intersect more than one object. When this happens, the object with the intersection point closest to the eye is selected. After this, a shadow ray is shot from the intersection to the light source (Figure 1). If this ray does not intersect an object, then the hit point is illuminated. If it does intersect another object, then that object casts a shadow on it (Figure 2). [4]

Repeating this operation for all pixels, a two-dimensional representation of a three-dimensional scene is obtained. [4]
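The per-pixel procedure just described can be sketched in Python. The scene representation, the `intersect` helper (which is assumed to return the hit distance or None), the light position, and the color values are all illustrative assumptions, not part of the article:

```python
def trace_pixel(eye, ray_dir, objects, light_pos, intersect):
    """One step of the algorithm above: find the closest hit along the
    primary ray, then shoot a shadow ray from it toward the light."""
    # 1. Test the primary ray against every object; keep the closest hit.
    closest_t, closest_obj = None, None
    for obj in objects:
        t = intersect(eye, ray_dir, obj)
        if t is not None and (closest_t is None or t < closest_t):
            closest_t, closest_obj = t, obj
    if closest_obj is None:
        return (0, 0, 0)                        # background color
    # 2. Shoot a shadow ray from the hit point toward the light source.
    hit = tuple(e + closest_t * d for e, d in zip(eye, ray_dir))
    to_light = tuple(l - h for l, h in zip(light_pos, hit))
    length = sum(v * v for v in to_light) ** 0.5
    shadow_dir = tuple(v / length for v in to_light)
    # Offset the origin slightly to avoid re-hitting the surface we left.
    shadow_origin = tuple(h + 1e-4 * s for h, s in zip(hit, shadow_dir))
    for obj in objects:
        if obj is closest_obj:
            continue
        t = intersect(shadow_origin, shadow_dir, obj)
        if t is not None and t < length:
            return (0, 0, 0)                    # Figure 2: hit point in shadow
    return closest_obj["color"]                 # Figure 1: hit point illuminated
```

Calling this once per pixel, with `ray_dir` pointing through that pixel on the image plane, yields the two-dimensional image described above.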

Some characteristics of ray tracing

One of the advantages of ray tracing is that a basic version takes just a few lines of code and little effort to implement, unlike other algorithms such as a scanline renderer. [4]

Ray tracing was first described by Arthur Appel in a paper published in 1968 entitled ‘Some Techniques for Shading Machine Renderings of Solids’. Although it is a valuable algorithm, the main reason it has not replaced all other rendering algorithms is that it is a very time-consuming method, taking a long time to find the intersections between rays and geometry. Historically, this has been the major drawback of ray tracing, but it has become less of a problem as computers get faster. Compared to other techniques (e.g. the z-buffer algorithm), however, ray tracing is still slower. [4]

Extending the idea of ray propagation, it is easy to simulate effects like reflection and refraction. These are essential when simulating glass materials or mirror surfaces. Turner Whitted described how to extend Appel’s ray tracing algorithm for more advanced rendering in his 1979 paper, ‘An Improved Illumination Model for Shaded Display’. [4]
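In the Whitted-style extension, when a ray hits a mirror-like surface, a secondary ray is spawned in the reflected direction and traced recursively (up to some depth limit). The reflected direction itself is a one-line formula: for an incoming direction d and unit surface normal n, r = d - 2(d·n)n. A minimal sketch:

```python
def reflect(d, n):
    """Mirror-reflect direction d about the unit normal n: r = d - 2(d.n)n.

    Both vectors are plain (x, y, z) tuples; n is assumed normalized.
    The recursive tracer would shoot a new ray from the hit point along
    the returned direction and blend the result into the surface color.
    """
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))
```

For example, a ray travelling straight down onto a floor with normal (0, 0, 1) reflects straight back up; refraction for glass follows the same pattern with Snell's law determining the bent direction.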

Ray tracing and virtual reality

According to Einig (2017), in virtual reality, ray tracing makes it possible to “counter the lens distortion at the very first stage of the rendering process, instead of moving and stretching some pixels at the end of the render like in rasterisers. Even better, the amount of rays sent per pixel can vary depending on the pixel position in the frame, which means that it is trivial to implement foveated rendering, which tracks the eye and only draws the highest detail images where you are looking, and add precision where it matters.” [5]

On the technological side, in 2016, NVIDIA announced new SDKs and updates for NVIDIA DesignWorks and NVIDIA VRWorks that improve the capabilities for interactive ray tracing. With the update, it is easier to create VR scenes and panoramas in their physically based ray tracing software. It is a matter of selecting a 360-degree camera from the list provided, and a scene can be viewed as a fully ray traced VR experience with a single step. [5]

NVIDIA has also updated their OptiX ray tracing engine “to include support for NVIDIA NVLink and Pascal GPUs including the powerful new DGX-1 appliance with 8 high-performance NVIDIA GPUs per node. This allows the visualization of scenes as large as 64GB in size – never before possible using GPU rendering. OptiX is used in commercial applications such as Adobe After Effects, as well as in-house tools at studios like PIXAR.” [5]

References

  1. Rademacher, P. Ray tracing: Graphics for the masses. Retrieved from https://www.cs.unc.edu/~rademach/xroads-RT/RTarticle.html
  2. WhatIs. Ray tracing (raytracing, ray-tracing or ray casting). Retrieved from http://whatis.techtarget.com/definition/ray-tracing-raytracing-ray-tracing-or-ray-casting
  3. Einig, M. (2017). How ray tracing is bringing disruption to the graphics market – and impacting VR. Retrieved from https://www.virtualreality-news.net/news/2017/mar/17/how-ray-tracing-bringing-disruption-graphics-market-and-impacting-vr/
  4. Scratchapixel. Introduction to ray tracing: a simple method for creating 3D images. Retrieved from https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/raytracing-algorithm-in-a-nutshell
  5. Estes, G. (2016). New VR and ray tracing tools for developers. Retrieved from https://blogs.nvidia.com/blog/2016/07/25/nvidia-sdk-updates/
