







I occasionally receive comments that my work doesn't appear computer generated. For me, that is the highest form of praise: my attraction to generative art began when I realized its power to produce exactly that kind of work. Here I'll detail some of the techniques I use to elevate the natural elements of my work.





This post is language and framework agnostic. We're only going to talk technique.





Please note: I am not claiming that art that looks natural is in some way inherently "better" than artwork that looks digital. Naturality need not be your end goal. It is just something I strive for as a matter of taste.





Guiding principle: Model your artwork's process after the real world





When possible, think about how you would produce the artwork you're after by hand, or how some natural occurrence might give rise to it. Modeling an actual physical process tends to yield more natural looking artwork.





For example: uniformly distributed data doesn't typically show up in nature. Most data follows a bell curve of some kind. Using normally distributed random variables (those with a mean and standard deviation) -- rather than plucking from some range or list -- often produces more natural looking effects.
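To make the difference concrete, here's a quick sketch (Python here, but any language with a random library works; the helper names are made up):

```python
import random

def natural_offset(mu=0.0, sigma=1.5):
    # Bell-curve sample: clusters near the mean, rarely strays far.
    return random.gauss(mu, sigma)

def uniform_offset(lo=-3.0, hi=3.0):
    # Uniform sample: every value in the range is equally likely,
    # which tends to read as mechanical.
    return random.uniform(lo, hi)

samples = [natural_offset() for _ in range(1000)]
```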





Lines





Let's assume we have a data type representing a line segment in our pocket. In order to draw it, we'd typically do something like this:





1. Start a path at the first point

2. Move to the end point

3. Stroke the path





The result might look something like this:









Too perfect for my taste!





Random slant





Suppose we're drawing several horizontal lines and we want them to look more natural. The first step might be to tweak the endpoints a little. We can do this by offsetting each endpoint slightly using a normal distribution with a small standard deviation in the x and y directions:
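A minimal sketch of that tweak (hypothetical names; the technique works the same in any stack):

```python
import random

def jitter(point, sigma=2.0):
    # Offset a point slightly in x and y using a normal distribution.
    x, y = point
    return (x + random.gauss(0, sigma), y + random.gauss(0, sigma))

# A "horizontal" line whose endpoints have been nudged off-grid.
start, end = jitter((0.0, 50.0)), jitter((100.0, 50.0))
```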





Line wobble





Now we have a line that is slightly offset from its user-defined position. It also varies slightly in length. However, it's still perfectly straight. That doesn't mimic the natural world all that well; let's make it less straight.





One way to do this is as follows:





1. Choose N points interpolated between the endpoints of the line segment to create a new path.

2. Tweak each point in the path.

3. Smooth out the path and draw it.





There are many variations on this pattern. A simple one is to take N linearly interpolated samples between the start and end points and tweak each one using a static standard deviation like before. I'll use Chaikin curve smoothing to produce a smooth path. Here's what that looks like:
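That variation might be sketched like this -- linear interpolation, per-point jitter, then Chaikin corner cutting (names are made up; parameters are knobs to play with):

```python
import random

def lerp(a, b, t):
    # Linear interpolation between two 2D points.
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def wobble_line(start, end, n=10, sigma=1.0):
    # Sample n evenly spaced points along the segment, then jitter each.
    pts = [lerp(start, end, i / (n - 1)) for i in range(n)]
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in pts]

def chaikin(path, passes=3):
    # Chaikin corner cutting: each pass replaces every segment with
    # points 1/4 and 3/4 of the way along it, keeping the endpoints.
    for _ in range(passes):
        smoothed = [path[0]]
        for p0, p1 in zip(path, path[1:]):
            smoothed.append(lerp(p0, p1, 0.25))
            smoothed.append(lerp(p0, p1, 0.75))
        smoothed.append(path[-1])
        path = smoothed
    return path

path = chaikin(wobble_line((0.0, 0.0), (100.0, 0.0)))
```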

















Neighbor influence





Great, now we are getting somewhere. Varying the standard deviation here will allow us to control the "wobbliness" of our line. However, you may notice that the wobbles are totally independent of one another. The line may wobble up at one sample, then far down at another. Here's a way to smooth it out.





Take the average of every pair of adjacent points in your tweaked path to create a new path (this is a very simple example of kernel smoothing). You can perform several averaging passes to produce a smoother line. Draw this new path as a curve. Now each point is influenced by its neighbors in a more natural way:
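The averaging pass might look like this (a sketch; note the path shrinks by one point per pass, which you may want to handle differently):

```python
def smooth(path, passes=2):
    # Each pass replaces the path with the midpoints of adjacent pairs.
    for _ in range(passes):
        path = [((x0 + x1) / 2, (y0 + y1) / 2)
                for (x0, y0), (x1, y1) in zip(path, path[1:])]
    return path

rough = [(0.0, 0.0), (1.0, 4.0), (2.0, -3.0), (3.0, 5.0), (4.0, 0.0)]
smoothed = smooth(rough)
```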

















I'd say the shaping looks pretty decent now. If I tried to draw a straight line on paper, it might look something like this. We can modify the standard deviation in our sample offsetting function to model a more or less precise hand.





Note: One alternative here is to use a one-dimensional noise function to offset each point in the path. That will also produce a smooth curve but it will be slightly more predictable depending on the noise function you are using. A bit more on noise later in the post.





Texturizing lines





Now our line is shaped pretty naturally. But, we're still drawing with a perfectly smooth "pen" when we call `stroke`. In reality, there are slight variations in texture when we use a tool to draw. One way to introduce such variation is to use the sand spline technique as popularized by Anders Hoff (aka inconvergent). The idea is simple:





1. Tweak each point in the original path very slightly using a normal distribution.

2. Generate a few new points between each adjacent pair of points, and draw them as tiny dots.

3. Repeat several times.
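The steps above can be sketched like so (hypothetical names; `sigma` and the dot counts are knobs to play with):

```python
import random

def lerp(a, b, t):
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def sand_pass(path, sigma=0.3, dots_per_segment=8):
    # One pass: jitter the guide path slightly, then scatter tiny
    # dots at random positions along each segment of the jittered path.
    jittered = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
                for x, y in path]
    dots = []
    for p0, p1 in zip(jittered, jittered[1:]):
        for _ in range(dots_per_segment):
            dots.append(lerp(p0, p1, random.random()))
    return jittered, dots

path = [(float(x), 0.0) for x in range(0, 101, 10)]
all_dots = []
for _ in range(20):  # repeat several times to build up the texture
    path, dots = sand_pass(path)
    all_dots.extend(dots)
```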





If your dots are super small you'll get a texture like this:









This is only one way to add texture to a line, but it's one of my favorite starting points. Experiment with your own textures and see what works for you. See sighack's "Fifteen ways to draw a line" for more inspiration in this area.





"Dreaming of the Desert" (2018). Textured lines are at the forefront of a lot of my work.





Shapes





Now that we can draw lines in an interesting way, let's graduate to the second dimension: shapes. We'll focus on quadrilaterals (quads) here for simplicity but these techniques can be applied to many shapes.





We can simply draw a quad and stroke its edges:

















We can apply the same principle as above to skew the shape a bit by tweaking each corner:

















We can select and smooth points between each adjacent coordinate to get a wobbly quad:

















We can smooth it with neighbor averaging too. Since the path is now cyclical, we have to be careful to average the last point with the first as well:
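A sketch of cyclic averaging, with the wraparound handled by modular indexing:

```python
def smooth_closed(path, passes=2):
    # Average adjacent points around a closed path; the last point
    # wraps around and is averaged with the first.
    for _ in range(passes):
        n = len(path)
        path = [((path[i][0] + path[(i + 1) % n][0]) / 2,
                 (path[i][1] + path[(i + 1) % n][1]) / 2)
                for i in range(n)]
    return path

quad = [(0.0, 0.0), (10.0, 1.0), (9.0, 10.0), (-1.0, 9.0)]
rounded = smooth_closed(quad)
```

One nice property: because every point contributes equally to the next pass, the shape's centroid stays put while the corners round off.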













None of this is new. We're effectively talking about a closed path here since we are just stroking the edges. How about filling the space? We can use fill but that's a little boring:

















To make the texture a bit more interesting, maybe we want to stipple the quadrilateral by generating a ton of points inside it*. We'll simplify the shape again to make bounds checking easier (stippling more complex polygons can be done, but let's keep it simple).

























* Note: this can be done by splitting the quad into two triangles and generating a point in either one with a uniform distribution.
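A sketch of that footnote's approach, using the standard barycentric-coordinate trick for uniform sampling inside a triangle (helper names are made up):

```python
import random

def point_in_triangle(a, b, c):
    # Uniform random point in triangle abc via barycentric coordinates.
    u, v = random.random(), random.random()
    if u + v > 1.0:  # fold the sample back into the triangle
        u, v = 1.0 - u, 1.0 - v
    w = 1.0 - u - v
    return (u * a[0] + v * b[0] + w * c[0],
            u * a[1] + v * b[1] + w * c[1])

def point_in_quad(quad):
    # Split the quad into two triangles and pick one weighted by area,
    # so the distribution stays uniform over the whole quad.
    a, b, c, d = quad
    def area(p, q, r):
        return abs((q[0] - p[0]) * (r[1] - p[1])
                   - (r[0] - p[0]) * (q[1] - p[1])) / 2
    t1, t2 = (a, b, c), (a, c, d)
    total = area(*t1) + area(*t2)
    pick = t1 if random.random() < area(*t1) / total else t2
    return point_in_triangle(*pick)

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
pts = [point_in_quad(square) for _ in range(2000)]
```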





It's a bit messy, though, because uniform random points aren't spread as evenly as you might expect -- they clump and leave gaps. To achieve a distribution of dots that looks more natural, we can pull values from one of my favorite sequences -- the (2,3) Halton sequence.





The (2,3) Halton sequence generates points in the unit square (0, 1) × (0, 1). In order to fill a quad with the dots generated in this sequence, we can:





1. Find the minimal bounding square containing the `Quad`

2. Generate N points in the (2,3) Halton sequence

3. Scale each point by the width (or height) of the minimal bounding square

4. Translate each point by the top left coordinate of the square

5. Filter out any points that aren't contained in the `Quad`
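Those five steps might be sketched like this for a convex quad (made-up names; the containment test assumes counter-clockwise winding):

```python
def halton(index, base):
    # The index-th element of the base-b van der Corput sequence.
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def contains(quad, p):
    # Point-in-convex-polygon test: p must lie on the same side
    # (here, the left) of every counter-clockwise edge.
    n = len(quad)
    for i in range(n):
        (x0, y0), (x1, y1) = quad[i], quad[(i + 1) % n]
        if (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0) < 0:
            return False
    return True

def stipple(quad, n):
    # Steps 1-5: bound the quad with a square, scale/translate the
    # (2,3) Halton points into it, then keep the points inside the quad.
    xs = [x for x, _ in quad]
    ys = [y for _, y in quad]
    side = max(max(xs) - min(xs), max(ys) - min(ys))
    pts = [(min(xs) + side * halton(i, 2), min(ys) + side * halton(i, 3))
           for i in range(1, n + 1)]
    return [p for p in pts if contains(quad, p)]

quad = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # CCW
dots = stipple(quad, 1000)
```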





This can be adapted to work with any shape that supports bounding by a square and point containment checking. The easiest way to try it out is by stippling a square, on which square bounding and point containment checks are both trivial operations.





Anyway, the (2,3) Halton sequence can be generated easily; see the pseudocode on Wikipedia. Here's a "stippled" fill with 10,000 points:













Since the (2,3) Halton sequence is not randomly generated, you might find that it produces some samey looking textures. A few ways of avoiding this are:





1. Scaling the perfect bounding square up and optionally jiggling it around a little to offset the sequence's center

2. Dropping N points from the beginning of the sequence

3. Randomly dropping a small portion of points from the sequence (but be careful because this will introduce more noise)





Of course you can combine these as well. Play with it!





Another common way of generating "stippled" textures is Poisson-disk sampling. It is a stochastic process, but it is also more expensive to compute and harder to implement. It's nice, though. I used that technique in my Wire series, as seen below.





Wire T









Of course, this is not the only texture out there. I talk a bit about some others in my Strange Loop talk from 2018 if you'd like to see more examples.





Color





Getting a feel for which colors work well together is really, really hard. It's so easy to pick some colors that you think will work well together just to realize that they completely clash and make your work look ten times worse than it did in black and white.





And I'd argue generative artists have it especially hard. The default way of exploring color in generative art is to manually tweak hard-coded variables and re-generate the image, which makes for a slow feedback loop; and quick experimentation is really the best way to get a feel for how colors work together. Can we do better?





Of course we can. Here's my approach, which I adapted from Joshua Davis. The gist is:





1. In a separate program (I use Autodesk SketchBook), construct a horizontal gradient strip and save it as a png.

2. Read the png as pixels in the generative art program, then interpolate between the beginning and end of the gradient using some easing function* (a function with output range [0,1])
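Step 2 might be sketched like this. To stay self-contained, I'm faking the png's pixel row as a short list of RGB stops rather than actually decoding an image; in practice those values would come from reading the strip's pixels:

```python
def sample_gradient(stops, t):
    # Sample a horizontal gradient strip at t in [0, 1].
    # `stops` stands in for the strip's row of pixels.
    t = min(max(t, 0.0), 1.0)
    pos = t * (len(stops) - 1)
    i = min(int(pos), len(stops) - 2)
    frac = pos - i
    # Linearly interpolate between the two nearest pixels.
    return tuple(round(a + (b - a) * frac)
                 for a, b in zip(stops[i], stops[i + 1]))

strip = [(20, 30, 80), (200, 90, 40), (250, 240, 200)]  # placeholder
color = sample_gradient(strip, 0.25)
```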





This approach works really well because you can easily modify the gradient and see quite clearly what looks good and what doesn't. It allows for really quick iteration. Additionally, most modern digital painting programs have the ability to mimic subtractive color mixing, which really helps with natural looking color transitions.





* Note: easing functions are everywhere. Check out Fast and Funky 1D Nonlinear Transformations if you don't believe me.





We'll use the following gradient to color the dots in the previous example (with 20 times as many dots):

















The easing function we'll use is linear in the y value: the top of the quad maps to 0 and the bottom maps to 1. Importantly, the result of the easing function (between 0 and 1) is slightly offset with a normally distributed random value. This helps form a less uniform texture:
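That easing function might look like this (a sketch with made-up names; `sigma` controls how much the color bands break up):

```python
import random

def ease_y(y, y_top, y_bottom, sigma=0.05):
    # Map y linearly to [0, 1] across the quad, then jitter the
    # result with a small normal offset and clamp back into range.
    t = (y - y_top) / (y_bottom - y_top)
    t += random.gauss(0, sigma)
    return min(max(t, 0.0), 1.0)

ts = [ease_y(25.0, 0.0, 100.0) for _ in range(500)]
```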









Nice. And that's just a ridiculously simple easing function (and a gradient I put together in 5 minutes). Imagine the possibilities!





Using noise effectively





Speaking of easing functions, how about noise? Perlin noise doesn't strictly satisfy the definition of an easing function -- its values vary between -1 and 1 (...ish, it's a bit more complicated than that). But with some clever mapping, clamping outliers, floating point modular arithmetic, and/or other tricks, we can of course limit the range to [0,1].
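The simplest such mapping is a shift-and-scale with clamping:

```python
def to_unit(n):
    # Map a noise value in [-1, 1] to [0, 1], clamping any outliers.
    return min(max((n + 1.0) / 2.0, 0.0), 1.0)
```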





Here's a piece from my Modular series, where I used 2D noise as an easing function for colors:





The variations in color are subtle, but that's the point. They make each shape "pop" and give the piece depth without overwhelming the viewer with too much unnecessary complexity.





If you look closely, you'll notice that the crosshatching texture is also driven by noise:













Note that raw Perlin noise can start to feel a little lifeless after a while. Try experimenting with fractal Brownian motion or developing your own noise functions to toy with. Tyler Hobbs's essay "Flow Fields" is a good place to start digging in further (Perlin noise forms an infinite vector field; flow fields are a useful generalization).





Physicality





Back to our guiding principle above: one way to really crank up the naturality of artwork is to simulate actual physics.





This idea is covered in great depth by Nature of Code, which I will just straight up recommend in this section instead of re-stating Daniel Shiffman's work. Daniel breaks up complex physics problems into easily digestible chunks, making it quite easy to implement the ideas. To produce artwork, I often run a physical simulation for a while and take a "snapshot" at some point, which becomes the final image.
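As a tiny illustration of the simulate-then-snapshot idea (not Shiffman's code, just a sketch): Euler-integrate a ball bouncing around a box under gravity, and keep its trail as the thing you eventually draw:

```python
def simulate(steps=2000, dt=0.01, gravity=(0.0, -9.8)):
    # Euler-integrate a ball bouncing inside the unit box,
    # recording its position at every step.
    x, y = 0.5, 0.9
    vx, vy = 0.8, 0.0
    trail = []
    for _ in range(steps):
        vx += gravity[0] * dt
        vy += gravity[1] * dt
        x += vx * dt
        y += vy * dt
        if not 0.0 <= x <= 1.0:  # bounce off the side walls
            vx = -vx
            x = min(max(x, 0.0), 1.0)
        if not 0.0 <= y <= 1.0:  # bounce off the floor/ceiling
            vy = -vy
            y = min(max(y, 0.0), 1.0)
        trail.append((x, y))
    return trail

trail = simulate()  # the "snapshot": trace this path as the artwork
```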





For example, in Rosewater Serum, gravity is applied to a ball which bounces around in each square in the grid. The direction of the gravity is determined by a backing noise field, and each ball's path is traced:









The core of Dust Bowl is a simulation of a rubber band being snapped:













To me, these pieces feel alive. And it's no surprise, really -- they are sticking closely to our guiding principle:

Model your artwork's process after the real world

When we focus on emulating natural processes in code, we get back more natural work as a result.





All in all, chasing naturality in my work is what I love to do. I'm willing to make my computer spend some extra time generating textures or simulating physics for me in order to produce a more lively feeling piece of work. I hope this article has given you a good jumping-off point for experimenting with naturality in your own work.





PS: I've deliberately kept this article light on code. The ideas are general enough to apply in many environments, and I did not want to alienate anyone by choosing a specific stack. That said, if you are trying to implement something from this article and get stuck, I would be happy to assist in some capacity. Just email me: bkovach13 [at] gmail [dot] com.



