Yae Sakura Tech Demo
Release date: 02/01/2019
Music: Togen Renka
Website: https://www.bilibili.com/video/av42250009
Unity China article: https://connect.unity.com/p/ba-zhong-ying-duan-pian-zhi-zuo-ji-zhu-fen-xiang



The "Yae Sakura" short film involved about 20 people (6 artists worked full-time, the rest part-time; of the 20, roughly 40% are engineers and 60% are artists). Production ran from June 2018 to January 2019, and the film was officially released on February 1, 2019. He Jia holds a master's degree from the University of Science and Technology of China and previously worked as a research and development engineer at NVIDIA Semiconductor Technology. He is currently a technical director and art director at miHoYo, focusing on PBR real-time rendering, NPR cartoon rendering, procedural animation, and interactive physics, and is committed to using Unity to produce high-quality cartoon-style CG rendering.

What was the original intention and background of the short film "Yae Sakura"?

Mainly to explore new rendering styles and expressive possibilities by combining the latest technology, and to accumulate real-time engine rendering experience in making CG animation. The recognition the short film received also validated miHoYo's ability to combine technical research with art.

Why did you choose Unity and a real-time rendering engine to create this animated short?

We made comprehensive customizations to Unity's entire rendering process: the render pipeline, material system, special effects, and the whole post-processing chain were completely rewritten around the final look. This degree of customization is relatively easy to implement on Unity (we developed a customized HDRP render pipeline based on the default HDRP template and wrote specialized shaders for it).

Using real-time rendering to make animated shorts keeps both the rendering overhead and the cost of iterating on effects extremely low. From the perspective of iteration efficiency, we can intuitively adjust cameras, lights, materials, and so on in a WYSIWYG fashion, and outputting the final 4K 60fps five-minute video for an effect iteration takes only about two hours (using AVPro Movie Capture), which is far faster than a traditional offline pipeline.

Our production team is also relatively young, and it is precisely these advantages that made the production of this film possible.

What role did Unity HDRP play in this video? Please analyze it from a technical perspective.

Before "Yae Sakura", we rendered two other videos with Unity's default render pipeline and ran into many limitations during development. These limitations mainly stem from the high degree of encapsulation of the default pipeline, which prevented us from freely controlling how many effects are rendered. Some examples:

The first problem we encountered was the inability to precisely control light sources. To achieve high-quality character shadows, we needed to assign a dedicated light to each character and, at render time, determine whether the current light is that character's main light: if so, use a high-quality shadow map; otherwise, fall back to the built-in default shadow map. Unity's default pipeline does not let us easily traverse the entire light list, and in the shader we cannot know which shader light corresponds to which actual light in the scene, which makes it impossible to flexibly balance a character's self-shadow and default shadows under multiple lights. In the new HDRP render pipeline, the full light list can be traversed in a single pass, and the light id can be used to determine whether a light is the character's main light, which fundamentally solves our problem.
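The per-character main-light decision described above can be sketched as follows (a minimal Python illustration; the class and function names are our own, not miHoYo's code, and in HDRP this logic would live in the shader, keyed on the light id):

```python
# Sketch of choosing a shadow map per light while traversing the full
# light list in one pass. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Light:
    light_id: int
    intensity: float

def pick_shadow_map(light: Light, character_main_light_id: int) -> str:
    """Use the high-quality character shadow map only for the
    character's designated main light; fall back otherwise."""
    if light.light_id == character_main_light_id:
        return "high_quality_character_shadow_map"
    return "default_shadow_map"

# Traverse the whole light list (possible in one pass under HDRP):
lights = [Light(0, 1.0), Light(1, 0.4), Light(2, 0.2)]
choices = [pick_shadow_map(l, character_main_light_id=1) for l in lights]
```

The point is simply that the shader can see every light and its id at once, so the high-quality/default decision becomes a trivial comparison.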

The second problem is that under Unity's default pipeline, we could not flexibly control the rendering of many screen-space textures, such as the camera depth texture, camera depth-normal texture, and camera motion vector texture. Unity usually renders these screen-space textures with replacement shaders, which are almost a complete black box to the user. To achieve stylized cartoon rendering, we needed to customize the rendering of these textures heavily. For example, character rendering requires an extra pass for the outline (stroke) effect, but replacement shaders do not support multi-pass rendering, so many screen effects break at the outlines, such as DOF, TAA, and SSAO. The outline area in particular has a great influence on TAA quality. To render correct motion vectors for the outlined area, we need the previous frame's skinned-mesh data such as normals and tangents (and if the mesh comes from an abc file, or we use dual quaternion skinning, even more data), and we cannot easily obtain this data under Unity's default pipeline. Under the HDRP render pipeline, we can fully control the rendering of these textures through custom LightMode tags and shaders, which greatly facilitates our development.
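The motion-vector math behind this can be illustrated with a small sketch (our own, not the production shader): a motion vector is the current normalized-device-coordinate position minus the previous frame's, which is exactly why the previous frame's skinned vertex data must be available.

```python
# Minimal sketch of a screen-space motion vector for TAA, assuming the
# previous frame's clip-space position of each vertex is known.
import numpy as np

def ndc(clip):
    """Perspective divide: clip space -> normalized device coordinates."""
    return clip[:2] / clip[3]

def motion_vector(curr_clip, prev_clip):
    """Screen-space motion vector = current NDC minus previous NDC."""
    return ndc(curr_clip) - ndc(prev_clip)

# A vertex that moved from x = -0.5 to x = 0.5 at w = 1
# (the model-view-projection transform is assumed already applied):
prev = np.array([-0.5, 0.0, 0.5, 1.0])
curr = np.array([ 0.5, 0.0, 0.5, 1.0])
mv = motion_vector(curr, prev)
```

For an extruded outline vertex, `prev` must be built from the previous frame's skinned (or Alembic-streamed) position of the same vertex, which is the data the default pipeline does not expose.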

In addition, the HDRP render pipeline allows us to use a large number of real-time lights in the scene, and its hybrid deferred & forward rendering greatly increases the flexibility of rendering. Thanks to tiled & clustered lighting, even objects that require forward rendering can be efficiently lit by a large number of real-time lights. Screen-space post effects, such as SSR and SSAO, can be applied correctly to both deferred and forward rendering under the HDRP pipeline, whereas in Unity's default render pipeline these effects are unfriendly to forward rendering. At the same time, the flexible pipeline lets us customize many stylized effects, such as the stylized lighting, fog, and decal systems mentioned above.
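The tiled-lighting idea can be illustrated with a toy sketch (a simplified 2D CPU version of our own; HDRP's actual implementation bins lights into 3D clusters on the GPU):

```python
# Toy tiled light culling: bin each light's screen-space bounding
# circle into fixed-size 2D tiles, so that a forward-rendered pixel
# only loops over the lights touching its tile instead of all lights.

TILE = 16  # tile size in pixels

def lights_per_tile(width, height, lights):
    """lights: list of (cx, cy, radius) in pixels.
    Returns a dict mapping (tile_x, tile_y) -> list of light indices."""
    tiles = {}
    for i, (cx, cy, r) in enumerate(lights):
        x0, x1 = int((cx - r) // TILE), int((cx + r) // TILE)
        y0, y1 = int((cy - r) // TILE), int((cy + r) // TILE)
        for ty in range(max(y0, 0), min(y1, height // TILE - 1) + 1):
            for tx in range(max(x0, 0), min(x1, width // TILE - 1) + 1):
                tiles.setdefault((tx, ty), []).append(i)
    return tiles

# A small light in the top-left corner and a large one near the middle:
tiles = lights_per_tile(64, 64, [(8, 8, 4), (40, 40, 20)])
```

Each shaded point then reads only its own tile's list, which is what makes many real-time lights affordable even in the forward path.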

Overall, Unity's original default render pipeline put up many obstacles to rendering development because of its high degree of encapsulation, making the efficient implementation of many effects difficult or even impossible. The new HDRP render pipeline uses a more advanced and versatile rendering architecture that gives developers great freedom and openness, which is critical for achieving high-quality game or film rendering. Although some low-level rendering features in HDRP (such as camera culling) are still not exposed, many rendering APIs remain closed, and the functionality is not yet stable, we believe that for capable development teams, HDRP is definitely the future of Unity and the best choice for high-quality rendering.

How does the beautiful cloth simulation work?

For the cloth, we run the simulation in Qualoth (a Maya plugin) and import the result into Unity via the Alembic format (abbreviated as abc).

The size of the abc animation file depends heavily on the polygon count of the model. We keep the cloth body to about 20,000 polygons, which basically preserves the mesh precision required for the wrinkle simulation; to ensure surface smoothness, the mesh is subdivided at render time. Four minutes of animation data at 30 FPS comes to only about 1 GB, a fairly moderate size.
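As a back-of-the-envelope sanity check on that figure (our own estimate, not from the article; the exact vertex count and Alembic's compression ratio are assumptions):

```python
# Rough upper-bound estimate of the Alembic file size quoted above,
# assuming ~20,000 vertices with positions stored as 3 x 32-bit floats
# per vertex per frame. Alembic also compresses, so the real file can
# come in under this raw figure.
frames = 4 * 60 * 30          # 4 minutes at 30 FPS = 7200 frames
vertices = 20_000
bytes_per_vertex = 3 * 4      # xyz as float32
raw_bytes = frames * vertices * bytes_per_vertex
raw_gb = raw_bytes / 1024**3  # ~1.6 GB raw, consistent with ~1 GB on disk
```

The raw stream lands around 1.6 GB, so the quoted ~1 GB is plausible once compression is taken into account.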

After importing into the engine, we used our own GPU-based Catmull-Clark subdivision to render a perfectly smooth cloth surface, and the whole process performed very satisfactorily.
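The standard Catmull-Clark rules can be sketched on the CPU as follows (a reference illustration only; the team's implementation runs on the GPU and the mesh handling here is simplified to closed quad meshes):

```python
# One level of Catmull-Clark subdivision on a closed quad mesh.
# Standard rules:
#   face point = average of the face's vertices
#   edge point = average of the edge's two endpoints and the two
#                adjacent face points
#   new vertex = (F + 2R + (n - 3)P) / n, where F = average of adjacent
#                face points, R = average of adjacent edge midpoints,
#                n = vertex valence, P = original position
import numpy as np

def catmull_clark(verts, faces):
    verts = np.asarray(verts, dtype=float)
    face_pts = np.array([verts[f].mean(axis=0) for f in faces])

    # collect edges and the faces adjacent to each edge
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(tuple(sorted((a, b))), []).append(fi)
    edges = list(edge_faces)
    edge_pts = {e: np.mean([verts[e[0]], verts[e[1]]]
                           + [face_pts[fi] for fi in adj], axis=0)
                for e, adj in edge_faces.items()}

    # move the original vertices
    new_verts = []
    for vi in range(len(verts)):
        adj_f = [fi for fi, f in enumerate(faces) if vi in f]
        adj_e = [e for e in edges if vi in e]
        n = len(adj_e)
        F = np.mean([face_pts[fi] for fi in adj_f], axis=0)
        R = np.mean([(verts[e[0]] + verts[e[1]]) / 2 for e in adj_e], axis=0)
        new_verts.append((F + 2 * R + (n - 3) * verts[vi]) / n)

    # assemble: moved originals, then face points, then edge points
    V, Fo = len(verts), len(faces)
    all_verts = np.vstack([np.array(new_verts), face_pts,
                           np.array([edge_pts[e] for e in edges])])
    eidx = {e: V + Fo + i for i, e in enumerate(edges)}
    new_faces = []
    for fi, f in enumerate(faces):
        m = len(f)
        for i, v in enumerate(f):
            prev_e = tuple(sorted((f[i - 1], v)))
            next_e = tuple(sorted((v, f[(i + 1) % m])))
            new_faces.append([v, eidx[next_e], V + fi, eidx[prev_e]])
    return all_verts, new_faces

# unit cube: 8 vertices, 6 quad faces -> 26 vertices, 24 quad faces
cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
cube_f = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
          [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
sub_v, sub_f = catmull_clark(cube_v, cube_f)
```

Each subdivision level quadruples the face count, which is why doing it on the GPU at render time is far cheaper than baking the dense mesh into the abc file.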

Because this is offline-baked data, iterating on the effect incurs a certain data-exchange cost. To further improve iteration efficiency, we also implemented a GPU-based Position Based Dynamics cloth simulation. Simulating 40,000 particles, it runs at 60 FPS in real time and exhibits natural wrinkles. This greatly reduces the time required for effect iteration, and GPU-based real-time simulation will start to be applied in subsequent productions.
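The core of a Position Based Dynamics cloth step can be sketched as follows (a minimal 2D CPU illustration of the standard algorithm; the production version runs 40,000 particles on the GPU):

```python
# Minimal Position Based Dynamics step: integrate predicted positions,
# then iteratively project distance constraints, then derive velocities
# from the corrected positions.
import numpy as np

def pbd_step(pos, vel, constraints, rest, inv_mass, dt=1/60,
             gravity=-9.8, iters=10):
    movable = (inv_mass > 0)[:, None]          # pinned particles don't integrate
    vel = vel + dt * np.array([0.0, gravity]) * movable
    pred = pos + dt * vel
    for _ in range(iters):
        for (i, j), d0 in zip(constraints, rest):
            delta = pred[j] - pred[i]
            dist = np.linalg.norm(delta)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0:
                continue
            corr = (dist - d0) / dist * delta  # restore rest length d0
            pred[i] += (inv_mass[i] / w) * corr
            pred[j] -= (inv_mass[j] / w) * corr
    return pred, (pred - pos) / dt

# Two particles: the top one pinned (inverse mass 0), the bottom one
# hanging from it by a distance constraint of rest length 1.
pos = np.array([[0.0, 0.0], [0.0, -1.0]])
vel = np.zeros_like(pos)
pos, vel = pbd_step(pos, vel, constraints=[(0, 1)], rest=[1.0],
                    inv_mass=np.array([0.0, 1.0]))
```

PBD maps well to the GPU because each constraint projection is a small local update, which is what makes 40,000 particles at 60 FPS feasible.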

Can you tell us about the details of hair implementation?

The base diffuse shading uses a multi-layer toon ramp to achieve rich shadow and color gradations and detail. The highlight calculation uses an anisotropic material with two layers of highlights, one high-frequency and one low-frequency, to achieve richer levels and variation.
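These two ingredients can be sketched as follows (our own illustration; the article does not name the exact highlight model, so we use the common Kajiya-Kay anisotropic term, and all thresholds and colors are made-up examples):

```python
# Sketch of a multi-band toon ramp for diffuse and a Kajiya-Kay style
# anisotropic highlight, combined as two frequency layers.
import numpy as np

def toon_ramp(n_dot_l,
              steps=(0.0, 0.35, 0.7),
              colors=((0.3, 0.2, 0.3), (0.7, 0.5, 0.6), (1.0, 0.9, 0.95))):
    """Quantize N.L into discrete bands, each with its own ramp color."""
    band = 0
    for i, t in enumerate(steps):
        if n_dot_l >= t:
            band = i
    return np.array(colors[band])

def kajiya_kay_spec(tangent, half_vec, exponent):
    """Anisotropic highlight from the hair tangent: sin(T, H) raised to
    a shininess exponent, so the highlight forms a band across strands."""
    t_dot_h = np.dot(tangent, half_vec)
    return max(0.0, 1.0 - t_dot_h**2) ** (exponent / 2)

# Two highlight layers: a tight high-frequency one and a broad
# low-frequency one, blended together.
t = np.array([1.0, 0.0, 0.0])   # hair tangent
h = np.array([0.0, 0.0, 1.0])   # half vector, here perpendicular to t
spec = 0.7 * kajiya_kay_spec(t, h, 200) + 0.3 * kajiya_kay_spec(t, h, 20)
```

The high-exponent layer gives the sharp "angel ring" glint while the low-exponent layer adds a soft sheen underneath it.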

To achieve a more stylized silhouette, we also use parametric curves combined with jitter noise to define the shape of the hair outlines. These parameters can be adjusted in real time, which makes it convenient to iterate on the final look. Rendering cartoon hair in real time places high demands on model production: the model must be built to strict hard-surface standards to ensure smooth outlines.
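A width function of this kind might look like the following (purely illustrative; the article does not give the actual curve or noise, so every parameter name and constant here is an assumption):

```python
# Sketch of a parametrized outline width: a smooth taper curve along
# the strand, modulated by deterministic jitter noise per segment.
import random

def outline_width(t, base=2.0, taper=1.5, jitter_amp=0.3, seed=42):
    """t in [0, 1] along the hair strand; returns width in pixels."""
    rng = random.Random(seed + int(t * 64))       # hashed per segment
    smooth = base * (1.0 - t) ** taper            # parametric taper curve
    jitter = jitter_amp * (2 * rng.random() - 1)  # noise in [-amp, amp]
    return max(0.0, smooth + jitter)

widths = [outline_width(i / 10) for i in range(11)]
```

Because `base`, `taper`, and `jitter_amp` are plain parameters, they can be tweaked live while looking at the rendered frame, which matches the real-time adjustment workflow described above.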

How did you make the eyes look so vivid? Did you use a special shader?

The eyes are a core expressive point of cartoon rendering. On one hand, we want the eyes to show texture from different viewing angles; on the other, we must preserve the feel of a cartoon illustration. We therefore customized a dedicated shader for the eyes that mainly simulates refraction inside the eye and the dispersion of focused light beams, and, combined with hand-painted textures, it finally achieved the desired effect.
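One common way to realize the refraction part of such a shader, consistent with the description above, is to refract the view ray at the cornea with Snell's law and offset the iris texture lookup by where the refracted ray lands (a sketch under that assumption; the refractive index and iris depth are illustrative constants, not values from the article):

```python
# Sketch of cornea refraction for an eye shader: refract the view ray
# at the cornea surface, march it down to a flat iris plane, and use
# the lateral displacement of the hit point as a UV offset.
import numpy as np

def refract(incident, normal, eta):
    """Snell refraction of a unit incident ray; eta = n1 / n2.
    For air into cornea (~1.0 / 1.376) total internal reflection
    cannot occur, but the guard is kept for completeness."""
    cos_i = -np.dot(normal, incident)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

def iris_uv_offset(view_dir, normal, iris_depth=0.2, eta=1.0 / 1.376):
    r = refract(view_dir, normal, eta)
    t = iris_depth / -np.dot(r, normal)        # ray length to the iris plane
    hit = t * r
    return hit - np.dot(hit, normal) * normal  # tangential part = UV shift

n = np.array([0.0, 0.0, 1.0])                  # cornea surface normal
head_on = iris_uv_offset(np.array([0.0, 0.0, -1.0]), n)
oblique = iris_uv_offset(np.array([0.4472, 0.0, -0.8944]), n)
```

Viewed head-on the iris sits undistorted, while at grazing angles it appears to slide inside the eye, which is what gives the eye its sense of depth from different angles; the dispersion and hand-painted layers would be composited on top of this.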