Enable Normal map baking (make sure it is set to calculate in tangent space) and Base color map baking.

In the xNormal global settings, you can choose between 8/16/32 bits and different image formats. I prefer 16-bit TIFF.

Bake the Normal and Base color maps.

After baking, open the resulting normal map in Photoshop. Check all the areas with small details, cavities, or clusters of small elements (like the palm of a hand). If the min/max ray distances were set incorrectly, the colors in these areas will not be continuous, and the min/max settings need to be adjusted. If everything looks fine, remove the alpha channel and save the image with maximum compression as a PNG (it is lossless, supports 16-bit, and compresses better than ZIP-compressed TIFF).

For use in Sketchfab or Marmoset Viewer, it is better to down-sample the map to 4K (if the original is 8K) and save a copy as JPEG at maximum quality (the image is converted to 8-bit during saving).
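If you process many maps, this down-sample-and-export step is easy to script. A minimal sketch with Pillow; the function name and file names are placeholders, not part of the original workflow:

```python
from PIL import Image

def export_viewer_copy(src_path, dst_path, size=4096, quality=95):
    """Down-sample a baked map and save a maximum-quality JPEG viewer copy."""
    img = Image.open(src_path)
    # Lanczos resampling preserves fine detail when halving 8K -> 4K.
    img = img.resize((size, size), Image.LANCZOS)
    # JPEG is 8-bit RGB with no alpha, so drop the alpha channel first.
    # subsampling=0 disables chroma subsampling for maximum quality.
    img.convert("RGB").save(dst_path, quality=quality, subsampling=0)
```

The same function works for the normal and albedo maps; only the target size changes.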

Back in xNormal, disable the normal and base color maps, and enable the ambient occlusion and cavity maps. Then change the antialiasing setting from 4x to 1x and bake these maps. Baking them is very slow, so you can minimize xNormal and lower its priority in the Task Manager.

Meanwhile, you can start working on the base color texture.

After xNormal finishes baking the AO and cavity maps, open them in Photoshop, delete the alpha channel, and convert from RGB to Grayscale.

Double-check for errors in areas with small details. Save the result as an 8K PNG for your archive, then down-sample it to 2K or 1K for Sketchfab or Marmoset (shadow detail doesn't need high resolution, and smaller maps load faster) and save it as a maximum-quality JPEG.
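The AO/cavity preparation can be sketched the same way. A minimal Pillow example; the function name and paths are placeholders:

```python
from PIL import Image

def prepare_ao_map(src_path, png_path, jpg_path, size=1024):
    """Drop alpha, convert to grayscale, down-sample, and save PNG + JPEG copies."""
    # convert("L") collapses RGB to a single grayscale channel and drops alpha.
    img = Image.open(src_path).convert("L")
    img = img.resize((size, size), Image.LANCZOS)
    img.save(png_path, optimize=True)   # lossless copy
    img.save(jpg_path, quality=95)      # maximum-quality JPEG for the viewer
```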

BASE COLOR TEXTURE

If your shots are correct and your cameras are aligned correctly, the texture should be quite good, needing fixes only where the cameras captured surfaces at a grazing angle or where the reconstruction had to fill in surfaces it never saw.

The best tools for this purpose are Photoshop's Healing Brush and the Patch tool in Content-Aware mode. They let you clone and blend texture from another part of the image into the part you want to restore or recreate.

This process requires technical and artistic skill, and above all attention to detail. You'll need to find similar, clean parts of the image to use as a source, and follow (as much as possible) the distortions of the UV map polygons.

Remember that every UV island has an extended border (padding) around it that must be rendered properly. A single black pixel on a polygon edge will be visible in a close-up render, and in real-time engines the lower MIP levels can make such an edge highly visible. That is why the padding matters so much (and should be 8 px or even wider).
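Most bakers generate this padding (edge dilation) for you, but the idea is simple enough to sketch. A minimal numpy version, assuming you have a coverage mask marking which pixels lie inside UV islands; the function name is a placeholder:

```python
import numpy as np

def dilate_edges(texture, mask, passes=8):
    """Grow UV-island colors outward into empty space, one pixel per pass.

    texture: (H, W, 3) float array; mask: (H, W) bool, True inside islands.
    Each pass fills empty pixels with the average of their filled neighbors.
    """
    tex = texture.copy()
    filled = mask.copy()
    for _ in range(passes):
        acc = np.zeros_like(tex)
        cnt = np.zeros(filled.shape, dtype=np.float64)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            # np.roll wraps at the texture borders; acceptable for a sketch.
            nb_mask = np.roll(filled, (dy, dx), axis=(0, 1))
            nb_tex = np.roll(tex, (dy, dx), axis=(0, 1))
            use = nb_mask & ~filled
            acc[use] += nb_tex[use]
            cnt[use] += 1.0
        new = cnt > 0
        tex[new] = acc[new] / cnt[new][:, None]
        filled |= new
    return tex
```

With `passes=8` you get the 8 px border discussed above.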

For recovering large areas, you can use Content-Aware Fill or the Patch tool.

Recovering texture that is split between UV islands can be a problem. In that situation, create a temporary UV layout where the split edge is merged into a single UV island, so you can paint continuously on a new texture. First bake the texture from the high-resolution source onto the temporary UV layout, then recover the damaged parts. After that, bake the corrected texture from the temporary UV layout back to the working one.

Remember that every texture re-bake loses small details, which is why it is good to use a higher resolution for the temporary UV map. You can even enlarge the UV island that needs fixing, and use double the resolution: a 16K temporary map for an 8K working map.

Afterwards, merge the fixed and original textures in Photoshop using layers and masks, masking out all the areas that were not touched on the temporary map.

Because this is mostly done in Photoshop (or another preferred tool), we will not cover it in this tutorial, but I hope you understand the basic idea.

ASIDE

You have probably found it a bit strange that we use 8-bit source images, texture the high-resolution mesh with them, and then bake to 16-bit. Yes, you read that right.

If you de-shadow and de-light the images in the pre-processing step, then the difference between 16-bit and 8-bit is just noise in the least significant bits. That is why we can use 8-bit images for texturing.

Any texture generation or baking step involves sub-pixel transformations, especially when baking from a 16/32K texture down to 8/4K. Averaging several source pixels into one always adds extra bits of precision, enough to justify a 16-bit result.
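A quick numpy sketch of why averaging adds bits: a 2x down-bake averages 2x2 blocks of 8-bit pixels, and the average usually falls between 8-bit values, so a 16-bit target genuinely preserves extra information (the pixel values below are illustrative):

```python
import numpy as np

# Four neighboring 8-bit pixels (0-255) that a 2x down-bake averages together.
block = np.array([100, 101, 102, 102], dtype=np.uint8)

# The average has fractional precision that 8 bits cannot store.
avg = block.astype(np.float64).mean()    # 101.25

# In a 16-bit target (0-65535) the fraction survives: 65535/255 = 257
# levels per original 8-bit step, so the quarter-step lands between
# the codes for 101 (25957) and 102 (26214).
avg16 = round(avg / 255.0 * 65535.0)     # 26021
```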

All the shadow details that could require a 16-bit pipeline were already recovered in the pre-processing step, and the baking does not add any new detail to the shadows remaining in the texture.

Most likely it is because of this workflow that I had no success with Unity's de-lighting tool: the textures are already de-lighted and only require small touch-ups and color corrections.

Any color correction of the textures, however, is better done in 16-bit.

And if you already know how to run a 16/32-bit pipeline better than this, why are you reading this tutorial?

SKETCHFAB

At last, we have our low poly OBJ mesh.

For Sketchfab we'll use 1-2K maps for the AO and cavity, 2-4K for the normals and albedo (and, if used, 2-4K maps for specular/glossiness).

We can pack all the files with 7-Zip (as a .zip or .7z archive) and upload them to Sketchfab. Alternatively, pack just the OBJ and MTL files, or only the FBX mesh, and add the textures later in the Sketchfab 3D Editor.

A useful trick: if you want to upload a “scene” containing more than one object to Sketchfab, export all the meshes as OBJ files, create an empty file named sketchfab.zbrush, and upload everything as one archive. Sketchfab will then treat the separate objects as a single scene.
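Building such an archive can be scripted with Python's standard zipfile module. A small sketch; the function name and OBJ file names are placeholders:

```python
import os
import zipfile

def pack_scene(archive_path, obj_paths):
    """Zip several OBJ meshes plus an empty sketchfab.zbrush marker file."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in obj_paths:
            # Store each mesh by its bare file name, not its full path.
            zf.write(path, arcname=os.path.basename(path))
        # The empty marker file makes Sketchfab load the OBJs as one scene.
        zf.writestr("sketchfab.zbrush", "")
```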

SKETCHFAB 3D EDITOR

I hope you have successfully added all the required materials to your model.

Normals from xNormal usually don’t require a Y-flip.

Ambient occlusion looks better with “Occlude specularity” turned on and set to 80-90%; cavity at 50%.