We’ve also developed a ton of other powerful nodes, such as automatic seam removal, compression artifact removal and light gradient removal. We’ve even created a single node called “Material Generation” that offers a feature set comparable to other software packages such as Substance B2M, Knald or Crazy Bump.

Working with scanned materials

It’s important to note that our tech is not procedural. It is based on Artificial Intelligence, which makes a big difference and has implications for the end user experience when working on scanned materials. Our approach is to automate large parts of this workflow and dramatically cut down on the time it takes.

Procedures, on the other hand, are handmade scripts that use math and random numbers (noise) to make art. These scripts need to be manually tailored to a specific piece of art, which is very labor intensive.
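To make the idea concrete, here is a minimal sketch of the kind of handmade procedure described above: classic 2-D value noise, where random values on a coarse grid are smoothly interpolated up to full resolution. This is an illustrative example assuming numpy; the function name `value_noise` and its parameters are ours, not part of any product mentioned here.

```python
import numpy as np

def value_noise(size=128, cell=16, seed=0):
    """2-D value noise: random values on a coarse grid, smoothly
    interpolated to full resolution -- a classic building block of
    hand-written procedural textures."""
    rng = np.random.default_rng(seed)
    n = size // cell + 2
    grid = rng.random((n, n))               # coarse lattice of random values
    coords = np.arange(size) / cell
    i = coords.astype(int)                  # lattice cell index per pixel
    f = coords - i                          # fractional position inside cell
    t = f * f * (3 - 2 * f)                 # smoothstep fade, avoids visible cell seams
    # Separable bilinear blend of the four surrounding lattice values.
    g00 = grid[np.ix_(i, i)]
    g01 = grid[np.ix_(i, i + 1)]
    g10 = grid[np.ix_(i + 1, i)]
    g11 = grid[np.ix_(i + 1, i + 1)]
    top = g00 * (1 - t[None, :]) + g01 * t[None, :]
    bot = g10 * (1 - t[None, :]) + g11 * t[None, :]
    return top * (1 - t[:, None]) + bot * t[:, None]
```

Note how every choice here (cell size, fade curve, random seed) is hand-tuned by the author of the script; nothing in it adapts to an input image, which is exactly the limitation the article goes on to describe.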

Procedural tools present a number of challenges when applied to a scan-based workflow, as they don’t have any analysis or understanding of the content they’re grooming. Consequently, there’s no one general purpose procedure that works well on arbitrary data. The ones typically in use today suffer from artifacts and general quality issues. One such strategy is known as Texture Bombing, which takes image data as input and randomly mixes patches around, blending them together. This can sometimes work with very simple textures that don’t have a structure or unique features. The other general purpose strategy is known as Graph Cuts. This second approach is less about blending patches together and more about finding ideal cuts between patches which minimize seams. Graph Cuts have two problems:

(1) They don’t fully remove seam artifacts, rather they reduce and redistribute seams.

(2) Getting the best cut makes it difficult to get the size you want; typically a games artist will want their final texture to be a power of two.

This is difficult using the Graph Cuts method without resizing the scans, losing both unique features and high fidelity details. General purpose procedures can never create anything new. When you do procedural, you start with a fixed algorithm and you have to tailor your data to it. When you go to A.I., you start with data and the algorithm tailors itself. So A.I. can be a one-size-fits-all solution that can produce great results on a wide variety of scans. Where A.I. can sometimes struggle is when there’s very little data to extrapolate from. This can happen when a scan isn’t really a texture but just a few very unique features, and the A.I. can’t find enough redundancy to learn the key aspects of that texture.
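The Texture Bombing strategy mentioned above can be sketched in a few lines: paste random patches from the source image onto an output canvas and feather their edges so overlaps blend. This is a hedged, illustrative sketch assuming numpy; `texture_bomb` and its parameters are our own naming, and it deliberately exhibits the blending-based artifacts the article describes.

```python
import numpy as np

def texture_bomb(src, out_size, patch=32, n_patches=400, seed=0):
    """Naive texture bombing: scatter random patches from `src` onto an
    output canvas, feathering patch edges so overlaps blend together."""
    rng = np.random.default_rng(seed)
    h, w = src.shape[:2]
    out = np.zeros((out_size, out_size, src.shape[2]))
    weight = np.zeros((out_size, out_size, 1))

    # Feathered alpha mask: 1 near the centre, falling to 0 at the edges.
    ramp = 1.0 - np.abs(np.linspace(-1, 1, patch))
    mask = np.minimum.outer(ramp, ramp)[..., None]

    for _ in range(n_patches):
        sy = rng.integers(0, h - patch)          # random source position
        sx = rng.integers(0, w - patch)
        dy = rng.integers(0, out_size - patch)   # random destination position
        dx = rng.integers(0, out_size - patch)
        out[dy:dy + patch, dx:dx + patch] += src[sy:sy + patch, sx:sx + patch] * mask
        weight[dy:dy + patch, dx:dx + patch] += mask

    # Normalise by accumulated weight to get a blended average.
    return out / np.maximum(weight, 1e-8)
```

Because patches are placed blindly, with no analysis of structure or unique features, the blending smears any distinctive detail, which is why this only tends to work on simple, feature-free textures.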

In general, though, I think a huge testament to our tech approach is that we haven’t even released a full-fledged product yet and many leaders in the scanning space have already found ways to use us regardless. We’ve built a better mousetrap and it is fair to say we are getting an incredible response from the market.

We’ve been incorporating feedback from the Unity demo team for over 18 months to optimize our solution for the next generation of scanning workflows. Our web prototype was their method of choice (18:50 & 23:50) for grooming the scans they captured for their recent photogrammetry demo into self-tiling, ready-to-go assets.