Program

Keynotes

The story of NVIDIA RTX (Steve Parker / NVIDIA)

Over a decade ago, NVIDIA began exploring the use of ray tracing in real-time applications. This culminated in NVIDIA RTX, introduced last year in the Turing architecture. NVIDIA RTX brings two new capabilities to modern real-time computer graphics: real-time ray tracing with new RT Cores, and deep learning through Tensor Cores. With a powerful combination of hardware and software, Turing delivers a level of real-time ray tracing performance that was previously thought to be several years out of reach.

With this architecture, ray tracing can now be used in real time for accurate reflections, ambient occlusion, area light shadows and even global illumination. We will show some beautiful results recently achieved by NVIDIA and partners in gaming, visual effects, scientific visualization and even audio processing.

In this talk we will explore the journey and evolution of RTX. We will cover some of the key elements of RTX including highlights of the Turing architecture and the ray tracing APIs. We will discuss how it is used for real-time path tracing, hybrid algorithms and for accelerating traditional rendering applications.

Finally, we will speculate on what the future may hold for ray tracing as bottlenecks shift dramatically in rendering algorithms, both real-time and offline.

Managing ultra-high complexity in real-time graphics: Some hints and ingredients (Fabrice Neyret / CNRS / INRIA / Grenoble University)

Flyovers of natural scenes probably illustrate the worst of it: an overload of details in the foreground, content continuing way past the horizon and view frustum, possibly animated at various scales (e.g. billowing clouds or flowing water in an Amazonian landscape), and we want all this looking realistic and artifact-free — and a look controllable by the artist, please.

The numbers involved will always outpace by many orders of magnitude the computation and memory resources of computers. Simply clamping details — so nineties — is no longer an option because in the real world their influence does emerge in the final appearance. So we had better be smart.

I will illustrate various hints and ingredients for tackling this, drawn from my lifelong experience in dealing with all aspects of natural scenes (and more) and in exploring how best to model and represent complexity as minimally-required, efficiently-manageable information.

Modern movie rendering: How ray tracing changed my industry (Luca Fascione / Weta)

The movie industry is in the last steps of completing a shift in rendering technology from rasterization-based workflows to path tracing-based ones. We will discuss how and why this change has happened, and propose ideas for where this new path may lead.

Bio: Luca Fascione is Senior Head of Technology & Research at Weta Digital, where he oversees Weta’s core R&D efforts including Simulation and Rendering Research, Software Engineering and Production Engineering. Luca is the lead architect of Weta Digital’s next-generation proprietary renderer, Manuka. This renderer is the culmination of a three-year research endeavour involving over 40 researchers and continues to allow Weta Digital to produce highly complex images with unprecedented fidelity. Luca joined Weta Digital in 2004 and has also worked for Pixar Animation Studios. Through a partnership with NVIDIA, Luca co-developed the GPU-based PantaRay that was instrumental in the making of the movie Avatar, and (since 2011’s The Adventures of Tintin) also became the foundation of volumetric shadow support within the Weta pipeline. Luca was recently recognized with a Scientific and Engineering award from the Academy of Motion Picture Arts and Sciences for his work on FACETS, Weta’s facial motion capture system.

Why learn something you already know? (Jaakko Lehtinen / NVIDIA, Aalto University)

While computer graphics has many faces, a central one is the fact that it enables creation of photorealistic pictures by simulating light propagation, motion, shape, appearance, and so on. In this talk, I’ll argue that this ability puts graphics research in a unique position to make fundamental contributions to machine learning and AI, while solving its own longstanding problems.

The majority of modern high-performing machine learning models are not particularly interpretable; you cannot, say, interrogate an image-generating Generative Adversarial Network (GAN) to truly tease apart shape, appearance, lighting, and motion, or directly instruct an image classifier to pay attention to shape instead of texture. Yet, reasoning in such terms is the bread and butter of graphics algorithms! I argue that tightly combining the power of modern machine learning models with sophisticated graphics simulators will enable us to push learning beyond pixels, into the physically meaningful, interpretable constituents of the world, which are all tied together by the fact that they come together under well-understood physical processes to form pictures. Of course, such “simulator-based inference” or “analysis by synthesis” is seeing increasing interest in the research community, but I’ll try to convince you that what we’re seeing at the moment is just a small sample of things to come.

Hot3D Sessions

Mobile GPU Power and Performance (Andrew Gruber / Qualcomm)

Abstract: Mobile GPUs need to live within the power and heat dissipation constraints of a device carried in your pocket – yet they are surprisingly capable. This talk will explore their capabilities relative to desktop devices and discuss their design and implementation differences and similarities.

Bio: Andrew Gruber is VP of GPU architecture at Qualcomm. He has been designing GPUs for 25 years, starting with the first ATI 3D chip, the 3D Rage. He and his team created the first ‘unified’ shader processor, which appeared in the Xbox 360. For the past 10 years, he has led the GPU architecture team for the Adreno series of mobile GPUs. He holds more than 75 GPU-related patents. He graduated from MIT with a BSEE in 1981.

Open Image Denoise – Open Source Denoising for Ray Tracing (Attila Afra / Intel)

Intel® Open Image Denoise is a recently released open source library of high-performance, high-quality denoising filters for images rendered with ray tracing. At the heart of the library is an efficient deep learning-based denoising filter, which was trained to be suitable for both interactive previews and final-frame rendering. Open Image Denoise supports Intel® 64 architecture based CPUs and compatible architectures, and automatically exploits modern instruction sets like SSE4, AVX2, and AVX-512. A simple but flexible C/C++ API ensures that the library can be easily integrated into most existing or new ray tracing based rendering applications. In the first half of the talk, we will give an overview of the Open Image Denoise library, discussing its main features and the denoising algorithm it uses. In the second half, we will briefly present the API (through a couple of code examples) and show results demonstrating both denoising quality and performance.
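To give a flavor of the API ahead of the talk, the library's public documentation shows basic usage along these lines (a sketch only, not material from the talk itself; image buffer allocation and rendering are omitted, and the code must be linked against the Open Image Denoise library):

```c
#include <OpenImageDenoise/oidn.h>
#include <stdio.h>

void denoise(float* color, float* output, int width, int height)
{
  /* Create and commit a device (CPU, auto-detected) */
  OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
  oidnCommitDevice(device);

  /* "RT" is the generic ray tracing denoising filter */
  OIDNFilter filter = oidnNewFilter(device, "RT");
  oidnSetSharedFilterImage(filter, "color",  color,
                           OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
  oidnSetSharedFilterImage(filter, "output", output,
                           OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
  oidnSetFilter1b(filter, "hdr", true); /* input is HDR */
  oidnCommitFilter(filter);

  /* Run the filter */
  oidnExecuteFilter(filter);

  /* Check for errors */
  const char* errorMessage;
  if (oidnGetDeviceError(device, &errorMessage) != OIDN_ERROR_NONE)
    printf("Error: %s\n", errorMessage);

  oidnReleaseFilter(filter);
  oidnReleaseDevice(device);
}
```

Optional auxiliary albedo and normal buffers can be attached the same way to further improve quality, which the talk's code examples are likely to cover.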

NVIDIA’s Turing: More Than Ray Tracing and AI (Yury Uralsky / NVIDIA)

The Turing architecture introduces a wealth of new features. Buzz around the GPU has centered largely on its ray tracing and deep learning capabilities. In this talk we will focus on Turing’s other powerful graphics features, which enable greater efficiency, performance, and scene complexity. We will discuss how this new functionality is used and implemented, covering mesh shading, variable rate shading, texture space shading, and view instancing.