Binary Texture Generation as an Optimization Problem

Proposed Approach

The texture synthesis problem consists of generating an image visually similar to a provided texture sample. For binary textures, this similarity can be quantified in terms of the frequencies of small pixel patches. Viewed this way, texture synthesis becomes an optimization problem: start with a random image and optimize it until its patch distribution matches that of the sample texture. Ideally, we'd want to use larger patches to better capture the structure, but there are 2^(N²) possible N x N patches, which makes 4 x 4 the largest practical size. Moreover, the distributions get increasingly sparse and unreliable as the patch size increases. To ameliorate this issue, the function subject to minimization is defined as the sum of statistical distances over the distributions of patches from 1 x 1 to 4 x 4. Some normalization may be required to ensure that all terms contribute significantly. Defects in the form of patches never occurring in the sample are inevitable; the lower-order statistics act as regularization, making sure that such defects at least look plausible. I've also experimented with another approach: running the optimization in stages with increasing patch size. This approach is trickier: if simulated annealing is used for optimization, the starting temperature of each stage must be carefully selected to ensure that the results of the previous stages are not completely destroyed.
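As a minimal sketch of this objective, the following code counts patches of sizes 1 x 1 through 4 x 4 and sums the distances between normalized distributions. The function names are mine, and a plain L1 distance stands in as a placeholder for whichever statistical distance is used:

```python
from collections import Counter

def patch_counts(img, n):
    """Count all n x n patches of a binary image given as a list of rows of 0/1."""
    h, w = len(img), len(img[0])
    counts = Counter()
    for y in range(h - n + 1):
        for x in range(w - n + 1):
            patch = tuple(tuple(img[y + dy][x + dx] for dx in range(n))
                          for dy in range(n))
            counts[patch] += 1
    return counts

def normalize(counts):
    """Turn raw counts into a frequency distribution (empty if no patches fit)."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()} if total else {}

def distance(p, q):
    """Placeholder statistical distance between two distributions (L1 here)."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def objective(img, sample, max_n=4):
    """Sum of distances between patch distributions for sizes 1..max_n."""
    return sum(distance(normalize(patch_counts(img, n)),
                        normalize(patch_counts(sample, n)))
               for n in range(1, max_n + 1))
```

A real implementation would store the counts in flat arrays indexed by the patch bits and update them incrementally, as described below.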

The simplest optimization strategy is hill climbing: modify the pixels and accept the changes that reduce the objective function. I went with simulated annealing, a more powerful option that produces better results at a higher computational cost. It works by accepting random changes with probability Exp((f(x) - f(x')) / T), where f is the objective function, x is the current state, x' is the new state, and T is a variable called temperature that decreases over time. Varying the temperature lets the algorithm gradually transition from a "hot" random search phase to a "cold" hill climbing phase. An exponentially decaying schedule, aka geometric cooling, is a good choice. The initial temperature can be estimated by performing some iterations at T = ∞ and calculating the standard deviation of the objective function. The final temperature should be small enough that practically all detrimental changes are rejected (something like 10⁻⁶).
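The annealing loop with geometric cooling can be sketched as follows. This is a generic sketch, not the framework's actual implementation: `mutate` returns a modified copy of the state, whereas a real implementation would modify in place and update the objective incrementally.

```python
import math
import random

def anneal(state, f, mutate, t0, t_final=1e-6, alpha=0.999):
    """Simulated annealing with geometric cooling T <- alpha * T."""
    energy = f(state)
    t = t0
    while t > t_final:
        candidate = mutate(state)
        e_new = f(candidate)
        # Always accept improvements; accept detrimental changes
        # with probability Exp((f(x) - f(x')) / T).
        if e_new <= energy or random.random() < math.exp((energy - e_new) / t):
            state, energy = candidate, e_new
        t *= alpha
    return state
```

With t0 estimated from the standard deviation of f during a hot warm-up phase, the early iterations accept almost everything, while near t_final the loop degenerates into hill climbing.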

There are several ways of modifying the pixels during the optimization. The simplest one is inverting a single random pixel. Higher-quality results can be obtained by randomly modifying pixel blocks. In technical terms, this increases the search neighborhood size and allows the algorithm to find better local optima. Larger modifications, however, require more optimization steps. The block size can be selected at random to combine speed and quality. Another way to improve efficiency is to copy the new pixel values from a random sample location instead of setting them at random. In any case, the small number of pixels modified at each step makes it possible to incrementally update the patch distributions and the objective function instead of recalculating everything from scratch. Here are some results generated with different options. Numbers denote the possible modification sizes, "R" stands for random and "S" for sample.
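A block modification step might look like the sketch below (the function name and signature are mine). It picks a random block size, then fills the block either from a random sample location or with random bits, and returns the block's coordinates so the caller can update the patch distributions incrementally:

```python
import random

def mutate_block(img, sample, max_block=4, from_sample=True):
    """Overwrite a random block of the output image in place.

    The block size is picked at random up to max_block; the new values come
    either from a random location in the sample or are drawn uniformly.
    """
    h, w = len(img), len(img[0])
    n = random.randint(1, max_block)
    y, x = random.randrange(h - n + 1), random.randrange(w - n + 1)
    if from_sample:
        sh, sw = len(sample), len(sample[0])
        sy, sx = random.randrange(sh - n + 1), random.randrange(sw - n + 1)
        block = [[sample[sy + dy][sx + dx] for dx in range(n)] for dy in range(n)]
    else:
        block = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for dy in range(n):
        for dx in range(n):
            img[y + dy][x + dx] = block[dy][dx]
    # The caller only needs to recount patches overlapping this block.
    return y, x, n
```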

Optimization dynamics at zero temperature, 128 x 128 output, curves averaged over multiple algorithm runs:

The choice of distribution distance measure is important. A large number of such measures exist. I've limited my testing to the following ones:

L2 norm: ∑ (aᵢ - bᵢ)²

L1 norm: ∑ |aᵢ - bᵢ|

Jensen-Shannon divergence: (∑ aᵢ Ln(aᵢ / mᵢ) + ∑ bᵢ Ln(bᵢ / mᵢ)) / 2, mᵢ = (aᵢ + bᵢ) / 2

Squared Hellinger distance: ∑ (√aᵢ - √bᵢ)²

Triangular discrimination: ∑ (aᵢ - bᵢ)² / (aᵢ + bᵢ),

where a and b denote the patch distributions. The Hellinger distance, JSD, and triangular discrimination give the best results, likely because they are more sensitive to frequency deviations of rare patches. JSD is costly to calculate because of the logarithm, though.
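For concreteness, the three best-performing measures can be sketched over sparse distributions stored as dicts (the representation and function names are mine; terms where both frequencies are zero are skipped, and every stored frequency is assumed positive):

```python
from math import log, sqrt

def jsd(a, b):
    """Jensen-Shannon divergence between two distributions given as dicts."""
    total = 0.0
    for k in set(a) | set(b):
        ai, bi = a.get(k, 0.0), b.get(k, 0.0)
        m = (ai + bi) / 2
        if ai:
            total += ai * log(ai / m)
        if bi:
            total += bi * log(bi / m)
    return total / 2

def hellinger_sq(a, b):
    """Squared Hellinger distance: sum of (sqrt(a_i) - sqrt(b_i))^2."""
    return sum((sqrt(a.get(k, 0.0)) - sqrt(b.get(k, 0.0))) ** 2
               for k in set(a) | set(b))

def triangular(a, b):
    """Triangular discrimination: sum of (a_i - b_i)^2 / (a_i + b_i)."""
    return sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 /
               (a.get(k, 0.0) + b.get(k, 0.0))
               for k in set(a) | set(b))
```

Note how triangular discrimination resembles JSD in shape but avoids the logarithm, which is consistent with it being a cheaper alternative of similar quality.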

Regarding the number of iterations - the more the better. Here are some results generated with different numbers of iterations per pixel (IPP) and 4 x 4 sample-based modifications:

Here is the algorithm in action. Exponential smoothing was applied to dampen the noise, so what you see is an average density of states.

Possible Extensions

A fun thing to try is to crossbreed different textures by providing two samples side by side:

Generating non-binary textures is possible in principle, but would require a more complex, slower implementation (hash tables instead of arrays for storing the distributions). Increasingly sparse distributions could also be a problem. The fact that the approach relies on small pixel blocks means that it cannot capture long-range dependencies and struggles with textures having large features or a regular structure. It may be possible to fix this deficiency by using the patch distributions at multiple scales. Instead of generating a new texture, the sample can be extended by limiting the pixel modifications to the area outside it. Image stylization is another extension requiring only a minor modification - an additional term for pixel-wise distance to the stylized image.
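The stylization term could be added along these lines (a sketch under my own assumptions: the weight parameter and names are hypothetical, and `texture_objective` stands for the patch-distribution objective described earlier):

```python
def stylization_objective(img, sample, target, weight, texture_objective):
    """Texture objective plus a weighted pixel-wise mismatch with the target image."""
    mismatch = sum(img[y][x] != target[y][x]
                   for y in range(len(img))
                   for x in range(len(img[0])))
    return texture_objective(img, sample) + weight * mismatch
```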

Implementation

Since the simulated annealing algorithm is part of my optimization framework, I only had to implement the problem definition. If you'd like to try generating some textures yourself, you can either download the source code and compile the framework or download the precompiled binary from my Patreon page (patrons only). The options are specified via the Config.ini file:

Root section
Algorithm: "SA"
SizeX, SizeY: output image size in pixels
Sample: path to the sample file (PBM format)
Output: path to the generated file (PBM format)
RandChangeSize: select the modification size at random, Yes / No (recommended: No)
SamplePatches: use patches from the sample, Yes / No (recommended: Yes)

"SA" section Iterations: the number of simulated annealing iterations (recommended: > 60 * SizeX * SizeY) T0Mode: initial temperature selection mode (recommended: "AutoLow") TfinEBased: final temperature selection based on precision (recommended: Yes) dEmin: precision (recommended: 1e-5) Acceptance: Acceptance function (recommended: "Exp") AcceptanceP: Acceptance function shape (recommended: 1) Schedule: Cooling schedule function (recommended: "Exp") ScheduleP: Cooling schedule shape (recommended: 1)



Follow me on Twitter to receive notifications about new blog posts. If you liked this post and would like to see more, consider supporting me on Patreon.