SWInE : Simplicial Weight Interpolation and Extrapolation

I've been procrastinating on this for more than ten years. The thing is, this particular algorithm is my brainchild. While working on my thesis, I stole exactly three ideas from other people, made them work together, and gave that compilation a funny name. This counts as authorship, right?

I had my fun playing with the concept but the conclusion I had to put into my thesis was: “SWInE is rather promising”. In academic language, this means “useless”. Almost everything you can do with SWInE, you can do better with splines. And this creates a dilemma. On one hand, I can share a unique piece of knowledge very few people in the world possess; on the other hand, this piece of knowledge is worthless.

Finally, I came up with this argument. The three ideas behind the algorithm are prominent. And if an algorithm helps show them in action, then maybe this is what makes the algorithm worthy.

Idea 1: inverse weights

Let's say we have two points (x_1, y_1) and (x_2, y_2). There is a function F(x) that fits these points. “Fits” means that

F(x_1) = y_1

F(x_2) = y_2

Determining this function between the points is called interpolation. Determining it to the left and to the right of the points is called extrapolation.

Inverse weights are a way to build the interpolating function everywhere apart from the points themselves.

∀ x, x_i < x < x_{i+1}:

F(x) = (y_i · k(x − x_i) + y_{i+1} · k(x_{i+1} − x)) / (k(x − x_i) + k(x_{i+1} − x))

k(x) = 1 / x^n

Yes, there would be a division by zero at x_i, so the function is left undefined there. But we can fix that with the magic of limits. The function gets infinitely close to the points in their respective proximities, so we can make it continuous by defining it at the points explicitly:

F(x_i) = y_i

Also, outside the points the function can be defined as the value of the first point on the left, and as the value of the last point on the right. This works as a primitive extrapolation.

The n coefficient in the weight function affects the shape of the function. You can experiment with it on the following interactive plot.

[Interactive plot: inverse-weight interpolation and extrapolation, with modes “interpolation”, “extrapolation”, “both”, and an adjustable weight coefficient.]

When the weight coefficient equals zero, the function becomes discontinuous. Negative coefficients are useless for interpolation. However, fractional coefficients may be useful. They bring more control over the function with no additional changes in the algorithm.

Mathematically, the inverse weights are simple. But there is no such thing as a limit in floating-point computation, there is no infinitely small proximity. There is no grace, only digits and errors. You have to define the interpolating function explicitly not only at the x_i points but in some measurable proximity of them too.

Then again, every computation on floating-point numbers is discrete so pragmatically this is just one more “guess an epsilon” problem.
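As a minimal sketch of the two-point case (the function names and the epsilon value are my own choices, not taken from the article's plot code), the “guess an epsilon” guard might look like this:

```python
EPSILON = 1e-9  # an assumed proximity; tune it for your data's scale

def k(d, n=2.0):
    # Inverse weight: k(d) = 1 / d^n.
    return 1.0 / d ** n

def interpolate(x, x1, y1, x2, y2, n=2.0):
    # Near a sample point the weights blow up, so within the epsilon
    # proximity we return the sample value directly.
    if abs(x - x1) < EPSILON:
        return y1
    if abs(x - x2) < EPSILON:
        return y2
    # Primitive extrapolation: clamp to the nearest sample outside the range.
    if x < x1:
        return y1
    if x > x2:
        return y2
    w1 = k(x - x1, n)
    w2 = k(x2 - x, n)
    return (y1 * w1 + y2 * w2) / (w1 + w2)
```

At the midpoint both weights are equal, so the result is the plain average of the two values.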

Idea 2: basis functions

We can now turn points into functions. That's awesome. But with inverse weights, the interpolating function always makes a plateau near the x_i points. What if we want it to have a slope instead?

The idea of basis functions is: we can interpolate functions instead of points.

F(x) = (f_i(x) · k(x − x_i) + f_{i+1}(x) · k(x_{i+1} − x)) / (k(x − x_i) + k(x_{i+1} − x))

And we can switch from values to basis functions for the extrapolation as well.

Technically, every continuous function f(x) such that f(x_i) = y_i can be a basis function. Let's try things out with a few selected types of polynomial functions: the constant (which is a degree-0 polynomial in a way), the linear function, and the quadratic function.

f_i(x) = y_i

f_i(x) = ax + b,  where ax_i + b = y_i

f_i(x) = ax^2 + bx + c,  where ax_i^2 + bx_i + c = y_i

The magic of inverse weights is in their indifference. They can work with any basis functions and it's easy to mix them up in a single formula to get something you want.
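To show that indifference in code, here is a sketch that blends a constant basis on one point with a linear basis on the other (a hypothetical setup of mine: points (0, 1) and (1, 3), with f_2(x) = 2x + 1 chosen so that f_2(1) = 3):

```python
EPSILON = 1e-9

def k(d, n=2.0):
    # Inverse weight: k(d) = 1 / d^n.
    return 1.0 / d ** n

def interpolate_bases(x, x1, f1, x2, f2, n=2.0):
    """Blend two basis functions with inverse weights on [x1, x2]."""
    # F(x_i) = f_i(x_i) = y_i by construction, so near the points
    # we just evaluate the corresponding basis function.
    if abs(x - x1) < EPSILON:
        return f1(x)
    if abs(x - x2) < EPSILON:
        return f2(x)
    w1 = k(x - x1, n)
    w2 = k(x2 - x, n)
    return (f1(x) * w1 + f2(x) * w2) / (w1 + w2)

# Mixing types: a constant basis for the left point (0, 1)
# and a linear basis for the right point (1, 3).
constant = lambda x: 1.0           # f_1(x) = y_1
linear = lambda x: 2.0 * x + 1.0   # f_2(x) = 2x + 1, so f_2(1) = 3
```

At the midpoint the weights are equal, so the result is the average of the two basis functions' values there: (1 + 2) / 2 = 1.5.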

On this interactive plot, you can see how different types of basis functions work together in different configurations.

[Interactive plot: four points with a selectable basis function configuration (constant-constant-constant-constant, constant-linear-linear-constant, linear-linear-linear-linear, constant-quadratic-quadratic-constant, linear-quadratic-quadratic-linear, or quadratic-quadratic-quadratic-quadratic) and an adjustable weight coefficient.]

With inverse weights and basis functions, we have almost too much freedom. We can have our function smooth on some interval, and sharp at one particular point. Or we can make it lie on a plateau near some point, and then fit some curvature into an interval.

But so far this only concerns single-variable functions. If we want to generalize weight interpolation to the n-dimensional case, we have to use an n-dimensional generalization of the interval.

Idea 3: self-referential n-simplex data structure

A point is a 0-simplex. A line segment, a piece of a line between two points, is a 1-simplex. A triangle, a region of a plane inside three line segments, is a 2-simplex. A tetrahedron is a 3-simplex. And if you can imagine how a tetrahedron becomes a point while traveling through the 4th dimension, that would be a 4-simplex. I hope you see a pattern here.

An n-simplex is the space between n+1 (n-1)-simplices.

This makes it a self-referential data structure. If we want to define an interpolating function F_1 on a line, F_2 on a triangle, and F_3 on a tetrahedron, we only have to define F_n once on an n-simplex.

With basis-function weight interpolation, you already have your interpolating function defined at the basis points.

F(x) = F_n(x)

x = x_i ⇒ F_n(x) = f_i(x)

Consequently, this defines F_0(x). For all the other n-simplices, the self-referential formula for F_n(x) looks like this.

∀ x, x ≠ x_j:

F_n(x) = ( Σ_{j=1}^{n+1} k(|x_j − x|) · F_{n−1}(x_j) ) / ( Σ_{j=1}^{n+1} k(|x_j − x|) )

The formula looks almost like the one with the basis functions, except now all the variables in bold are points: x, x_i, x_j.

Speaking of which, since we want to interpolate basis functions in an n-simplex, we should compute our basis functions out of distances to (n-1)-simplices. The x_j is the projection of x onto the j-th (n-1)-simplex. The |x_j − x| is then the distance from the point to its projection.
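In the 2D case, the (n-1)-simplices are line segments, so the projection and the distance reduce to a bit of vector arithmetic. A minimal sketch (function names are mine):

```python
import math

def project_onto_segment(p, a, b):
    """Project point p onto the line through segment (a, b).
    Returns the projection and whether it lands inside the segment."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    # The parameter t is the normalized position of the projection
    # along the segment: t = 0 at a, t = 1 at b.
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby)
    projection = (a[0] + t * abx, a[1] + t * aby)
    return projection, 0.0 <= t <= 1.0

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

For example, the point (0.5, 1) projects onto the segment from (0, 0) to (1, 0) at (0.5, 0), one unit away.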

Weight functions, basis functions, and now the (n-1)-simplex projections. This is enough to define an interpolation function on a triangle or a tetrahedron. But what about more complex shapes?

When we have a bunch of simplices that touch each other but never overlap, that's a simplicial complex. If you're OK with some representation error, you can cover any shape with a simplicial complex.

And what we can't cover with a complex, we can cover with spatial extrapolation.

With weight and basis functions, you can mix your interpolating function in n-simplices with interpolating functions on (n-1)-simplices. Now if a point isn't in an n-simplex but has a projection on an (n-1)-simplex, then computing the weights on this projection and applying them to the basis functions will make an extrapolation that will seamlessly merge with the interpolation.

The same goes for (n-2)-simplices, (n-3)-simplices, or (n-k)-simplices. Let's devise an algorithm by dividing interpolation and extrapolation into 3 cases.

If a point is in a triangle, then F_2 applies; if a point has a projection on the complex's border line segments, then F_1 works instead; if a point doesn't have such a projection, then the basis function of the nearest point, f_i, is the extrapolation function.
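The core of the first case, F_2 built recursively out of F_1 with constant basis functions, can be sketched like this (a compact illustration of mine with made-up names; the article's actual plot code is organized differently and handles the full three-case dispatch):

```python
import math

EPSILON = 1e-9

def k(d, n=2.0):
    # Inverse weight: k(d) = 1 / d^n.
    return 1.0 / d ** n

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def project(p, a, b):
    # Projection of p onto the line through the edge (a, b).
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby)
    return (a[0] + t * abx, a[1] + t * aby)

def f1(p, a, b, ya, yb, n=2.0):
    """F_1 on an edge: blend the two vertex values (constant bases)
    by inverse distances to the vertices."""
    da, db = distance(p, a), distance(p, b)
    if da < EPSILON:
        return ya
    if db < EPSILON:
        return yb
    wa, wb = k(da, n), k(db, n)
    return (ya * wa + yb * wb) / (wa + wb)

def f2(p, tri, values, n=2.0):
    """F_2 on a triangle: blend F_1 over the three edges, each
    evaluated at the projection of p onto that edge."""
    numerator = denominator = 0.0
    for i, j in ((0, 1), (1, 2), (2, 0)):
        proj = project(p, tri[i], tri[j])
        d = distance(p, proj)
        if d < EPSILON:  # p lies on this edge: F_2 degenerates to F_1
            return f1(p, tri[i], tri[j], values[i], values[j], n)
        w = k(d, n)
        numerator += w * f1(proj, tri[i], tri[j], values[i], values[j], n)
        denominator += w
    return numerator / denominator
```

Since every blend is a weighted average, the result always stays between the smallest and the largest vertex value; with equal vertex values the whole triangle is flat.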

Well, this particular algorithm doesn't work with concave shapes. Technically, we can still turn any concave simplicial complex into a convex one by adding more simplices. That's just more coding.

Here is an interactive plot. It has 6 triangles combined in a complex. Every point carries its own shade of gray, and the rest of the picture is the result of simplicial weight interpolation and extrapolation. The points are movable, but the relations between triangles are hard-coded.

[Interactive plot: a complex of 6 triangles with movable points; modes “interpolation”, “extrapolation”, “both”; constant or linear basis functions; an adjustable weight coefficient.]

And that's how we got from a pair of points to Simplicial Weight Interpolation and Extrapolation. Just as promised in the title.

P. S.

The code for all the plots is on GitHub: