Seurat

Seurat
Information
Industry: Virtual Reality
Developer: Google
Supported Devices: VR Hardware

Seurat is Google’s new tool built to bring the high-end 3D visuals usually seen in movies to mobile VR. Named after the famous French post-impressionist painter Georges Seurat, the tool accomplishes what other light-field approaches have so far failed to do: Seurat not only runs the CGI assets on mobile hardware, it does so at a fraction of the file size.

An Overview of Seurat

Seurat was introduced to the world by Google at the I/O 2017 annual developer festival in California. The new rendering technology does something remarkable. Even powerful desktop hardware designed for high performance cannot run ultra-high-quality CGI scenes in real time. Google has found an ingenious way to reformat such visuals so they run on virtual reality headsets without affecting the level of detail; the converted 3D scenes on the VR hardware are near-perfect replicas of their more complex counterparts.

The new technology isn’t just about running 360 videos. Users are not limited to experiencing an environment from one static point; instead, games designed with this rendering technology allow users to move around in a room-scale environment. The content that Seurat generates also fully retains real volumetric data and remains clear and sharp.

An obvious benefit of the new launch is that video game developers can now build games in which players move through high-fidelity environments generated with the Seurat tool.

How Seurat Works

According to Google, the process involves setting up a few parameters. First, the volume within which the user can view and walk around the scene is defined. Second, the target polygon count and the amount of overdraw are specified.
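As a rough illustration of that setup, the sketch below groups the parameters mentioned above into a single configuration object. The field names and values are assumptions chosen for illustration, not Seurat's actual interface.

```python
from dataclasses import dataclass

@dataclass
class ConversionParams:
    """Hypothetical parameters for a Seurat-style conversion (illustrative only)."""
    # Axis-aligned viewing volume ("headbox") the user may move within, in meters.
    headbox_min: tuple = (-0.5, -0.5, -0.5)
    headbox_max: tuple = (0.5, 0.5, 0.5)
    # Target complexity of the simplified output scene.
    max_triangles: int = 72_000
    # Limit on how many overlapping layers may be drawn per pixel.
    max_overdraw: float = 4.0

params = ConversionParams()
print(params)
```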

Once the required parameters are defined, the Seurat tool begins the conversion process using what is called surface light-field technology. From different points within the predefined volume, the tool captures a number of images and converts them into polygons. The recreated scene, stitched together from these polygons, is considerably smaller and simpler, capable of running on virtual reality headsets, yet nearly identical to the original scene.
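To make the capture step concrete, here is a minimal conceptual sketch in Python. It only shows the idea of sampling viewpoints inside the viewing volume and collecting color-plus-depth captures for later simplification; the function names and the placeholder renderer are assumptions, not Seurat's actual pipeline.

```python
import itertools

def sample_capture_positions(box_min, box_max, steps=3):
    """Generate a grid of camera positions inside the viewing volume (headbox)."""
    axes = []
    for lo, hi in zip(box_min, box_max):
        axes.append([lo + (hi - lo) * i / (steps - 1) for i in range(steps)])
    return list(itertools.product(*axes))

def render_rgbd(position):
    """Placeholder for rendering the full-quality scene from one viewpoint.
    A real pipeline would return a color image plus a depth map here."""
    return {"position": position, "color": None, "depth": None}

def capture_surface_light_field(box_min, box_max):
    """Capture many views from inside the headbox; a Seurat-style tool would
    later fuse these captures into a small set of textured polygons."""
    return [render_rgbd(p) for p in sample_capture_positions(box_min, box_max)]

if __name__ == "__main__":
    views = capture_surface_light_field((-0.5, -0.5, -0.5), (0.5, 0.5, 0.5))
    print(f"captured {len(views)} viewpoints")  # 3**3 = 27 sample positions
```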

How is Google’s Seurat different from other light-field approaches? The biggest problem other approaches face is the sheer amount of data they have to handle: they capture so much data from the ultra-high-quality visuals that delivering the result on virtual reality hardware becomes very difficult.

Google’s innovative approach of using room-scale view boxes gathers the same level of detail from the original asset, yet produces visuals that are only a few megabytes in size. What’s more, it is possible to add dynamic, interactive elements as well.
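To give a feel for the scale of that difference, the back-of-the-envelope calculation below compares raw light-field capture data against a simplified output. All numbers (capture count, resolutions, byte sizes) are made-up but plausible assumptions for illustration; none of them come from Google.

```python
# Illustrative comparison of raw light-field captures vs. a simplified output.
# Every figure below is an assumption for illustration, not published data.

captures = 100                       # sample viewpoints inside the view box
width, height = 2048, 2048           # resolution of each capture
bytes_per_pixel = 4 + 2              # RGBA color plus 16-bit depth

raw_bytes = captures * width * height * bytes_per_pixel
print(f"raw captures:    ~{raw_bytes / 1e9:.1f} GB")      # about 2.5 GB of source data

output_triangles = 72_000            # simplified geometry budget
bytes_per_triangle = 3 * 32          # three vertices at roughly 32 bytes each
atlas_bytes = 4096 * 4096 * 1        # one texture atlas in a compressed GPU format

output_bytes = output_triangles * bytes_per_triangle + atlas_bytes
print(f"converted scene: ~{output_bytes / 1e6:.0f} MB")   # roughly 100x smaller
```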

The Seurat Demonstration – Rogue One: A Star Wars Story Experience

Google, together with Lucasfilm, offered a demonstration of Seurat using the Imperial hangar from Rogue One: A Star Wars Story. The converted scene showed the animated CGI character K-2SO, the textures, and the lighting effects in incredible detail.

According to ILMxLAB, it took the company an hour to render the scene from the Star Wars movie on a high-end PC. The visual effects division of Lucasfilm then used Seurat to convert the scene into an experience that can be rendered on a mobile GPU in 13 ms. To convert the scene successfully, the polygon count had to be reduced by a factor of 1,000 and the texture size by a factor of 300.
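To put the 13 ms figure in context, the quick calculation below converts frame time into frame rate and compares it against common VR refresh rates. The refresh rates listed are general reference points, not figures from the demonstration.

```python
frame_time_ms = 13.0                      # reported mobile GPU render time per frame

fps = 1000.0 / frame_time_ms
print(f"{fps:.0f} frames per second")     # ~77 fps

# Frame budgets for common VR refresh rates (milliseconds per frame).
for hz in (60, 72, 90):
    budget_ms = 1000.0 / hz
    fits = "fits" if frame_time_ms <= budget_ms else "exceeds"
    print(f"{hz} Hz budget: {budget_ms:.1f} ms -> 13 ms {fits} the budget")
```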

To say the rendered environment is cinema quality would be a stretch. The experience that Seurat creates may not be perfect for now, but it is far better than what previous approaches achieved and what currently runs on virtual reality hardware.

An Idea that Holds Promise

It cannot be denied that the Seurat tool has succeeded where others have not, but even Google had to make compromises to achieve its goal. In the rendered environment, users can only walk around a small area (the room-scale viewing region set by the developers) to view the scene; the rest of the scene is only a backdrop. In truth, Google has not solved the problem of ultra-high-quality 3D rendering on VR hardware; it has simply found a way to work around it.

Potential Benefits

Speaking about the potential benefits of using Seurat, John Gaeta, creative director of ILMxLAB, said that the new tool clears the way for cinematic realism in virtual reality. He said the problem currently holding back the growth of virtual reality is a lack of great content, and added that any technology or tool that helps create compelling interactive experiences in VR would be helpful.

Unanswered Questions

It is still early days for Seurat, and at the event Google did not reveal much beyond an overview of the new tool. The tech giant promised to give a deeper technical explanation of Seurat in the near future. In addition, there is no word on when the Seurat tool will be made available to content developers.
