

> {-# OPTIONS_GHC -fno-warn-missing-methods #-}





> sinc x = sin x/x





> sinc' 0 = 1

> sinc' x = sin x/x





> desingularise f x = re (f (D x 1))





> ex1 = desingularise sinc 0





> ex2 = desingularise (\x -> x/(exp x-1)) 0






> instance Fractional a => Fractional (Dual a) where

>     fromRational r = D (fromRational r) 0

>     D 0 a'/D 0 b' = D (a'/b') 0

>     D a a'/D b b' = D (a/b) ((b*a'-a*b')/(b*b))






> ex3 = desingularise (\x -> sin x^2/x^2) 0





> ex4 = desingularise (desingularise (\x -> sin x^2/x^2)) 0
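As an aside, here is a sketch of my own (not part of the development above) of the power-series alternative: when you don't know in advance how many applications of l'Hôpital's rule a singularity needs, formal power series, represented as lazy coefficient lists, can cancel however many leading zeros the numerator and denominator share.

```haskell
-- A formal power series is an infinite list of coefficients.
-- Multiplication: (a0 + x*A) * B = a0*B + x*(A*B).
mulS :: [Double] -> [Double] -> [Double]
mulS (a0:at) b@(b0:bt) = a0*b0 : zipWith (+) (map (a0*) bt) (mulS at b)
mulS _ _ = []

-- Division cancels common leading zeros first: the power-series
-- analogue of applying l'Hopital's rule as many times as needed.
-- (A genuine pole still ends up dividing by zero.)
divS :: [Double] -> [Double] -> [Double]
divS (0:a) (0:b) = divS a b
divS (a0:at) b@(b0:bt) = q0 : divS (zipWith (-) at (map (q0*) bt)) b
  where q0 = a0 / b0
divS _ _ = []

-- Coefficients of sin x: x - x^3/6 + x^5/120 - ...
sinS :: [Double]
sinS = [ if odd n then (-1) ^ (n `div` 2) / fromIntegral (product [1..n]) else 0
       | n <- [0 :: Integer ..] ]
```

With `xS = 0 : 1 : repeat 0` for x and `x2S = 0 : 0 : 1 : repeat 0` for x², `head (divS sinS xS)` recovers sinc's value at zero, and `head (divS (mulS sinS sinS) x2S)` handles the second-order example that defeats a single application of the dual-number trick; both give 1.0.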





> horrible x = exp (-1/x^2)





> data Dual a = D a a deriving (Show,Eq)



> re (D a _) = a

> im (D _ b) = b



> instance Num a => Num (Dual a) where

>     fromInteger i = D (fromInteger i) 0

>     (D a a')+(D b b') = D (a+b) (a'+b')

>     (D a a')-(D b b') = D (a-b) (a'-b')

>     (D a a')*(D b b') = D (a*b) (a*b'+a'*b)



> instance Floating a => Floating (Dual a) where

>     cos (D a a') = D (cos a) (-sin a*a')

>     sin (D a a') = D (sin a) (cos a*a')

>     exp (D a a') = D (exp a) (exp a*a')



Here's a neat trick you can carry out with the `Dual` type (defined above) that I've talked about many times before. Suppose you define a function like `sinc`. Clearly this function has a singularity at x=0, where the division isn't defined. But this is an example of a removable singularity, because the function can be replaced by one that is continuous (in fact, analytic) everywhere and well defined at x=0. We could do that by hand, as in `sinc'`, but it'd be cool if we could automate the process. The function `desingularise` does exactly that. As if by magic, `ex1` now evaluates to 1. We can try other functions too: `ex2` evaluates x/(exp x-1) at zero and also gives 1.

So how does it work? Well, I've modified my usual definition of `(/)` so that it is able to divide one infinitesimal number by another; that's the first clause of the `Fractional` instance above. `desingularise` 'perturbs' the value of its argument infinitesimally, so that when we pass in an argument of zero we end up evaluating our function not at zero, but somewhere infinitesimally close to it. Our division by zero becomes a division of one infinitesimal by another, and the special clause comes into play. That clause can also be viewed as an application of l'Hôpital's rule: (a'ε)/(b'ε) = a'/b'.

But it doesn't solve all of our problems. For example `ex3`, which evaluates sin²x/x² at zero, doesn't quite work: it produces NaN, because you need two applications of l'Hôpital's rule. We can implement the second application easily by nesting, which is what `ex4` does, and it gives 1 as expected. That's fine if you know how bad your singularity is in advance. If you don't, you can use power series instead. None of these methods will work for `horrible`, however: the intermediate expression -1/x² has a genuine pole at zero, not a removable singularity, even though the surrounding `exp` tames it in the limit. Anyway, the fact that it doesn't work for all functions doesn't detract from the fact that it's a pretty approach to solving the problem when you have a fairly well behaved singularity.

I've not actually seen anyone else use dual numbers to smooth over singularities in this way, but a few years ago I did come across something very closely related that apparently caused a little controversy in the computational geometry world.

Suppose you want to determine whether or not a point lies within a possibly non-convex and arbitrarily complicated polygon.
One algorithm is this: follow a line in any direction from the point to infinity (though just outside the bounding box of the polygon would do) and count how many times you cross the boundary of the polygon. If you count an odd number of crossings you're inside the polygon; otherwise you're outside. It's beautifully simple, and if nobody had thought of it before it'd be worthy of publication.

But like the methods described in many computational geometry papers I've read, this one often fails in practice. There are at least two sources of difficulty. Firstly, computational geometry algorithms are subject to numerical error. You often find yourself having to decide which side of a line a point lies on, say, and getting inconsistent results because two different ways of computing the same thing round differently and so lead to different decisions. That can be fixed by using something like arbitrary precision rational arithmetic. But even if you do, there is still a problem.

When you follow a line from the point you're testing, you may find that it doesn't simply cross one of the sides of the polygon: it may pass precisely through a vertex. If your code isn't written carefully, you may accidentally count that as zero, one or two crossings. It's easy to write sloppy code that gets it right most of the time, but it's hard to work out every possible case where some kind of degeneracy might happen. What happens if the outward ray is actually collinear with an edge of the polygon? Do vertex intersections and collinearities cover everything?

An ugly kludge is this: at the first sign of trouble, restart the algorithm but randomly jiggle all the inputs a little. For a large class of algorithms this will work tolerably well, depending on your definition of tolerable. (Compare with Sard's theorem in Morse theory.) But it really is ugly, especially if you've gone through the hassle of using arbitrary precision arithmetic.
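Stripped of the degenerate cases, the crossing-count test itself is only a few lines. Here's a minimal sketch (the names are my own, and it assumes a non-empty vertex list):

```haskell
type Pt = (Double, Double)

-- Even-odd test: cast a horizontal ray from the point towards
-- +infinity and count crossings with each polygon edge.  The guard
-- on y ensures we never divide by zero, but collinear edges and
-- rounding can still cause trouble, exactly as discussed above.
inPolygon :: Pt -> [Pt] -> Bool
inPolygon (px, py) vs =
    odd (length (filter crosses (zip vs (tail vs ++ [head vs]))))
  where
    crosses ((x1, y1), (x2, y2)) =
      ((y1 > py) /= (y2 > py)) &&
      px < x1 + (py - y1) * (x2 - x1) / (y2 - y1)
```

For example, with the unit square `[(0,0),(1,0),(1,1),(0,1)]`, the point `(0.5,0.5)` tests inside and `(1.5,0.5)` outside.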
But maybe you can see what's coming... Back in '94, Edelsbrunner and Mücke came up with the idea of deforming the geometry infinitesimally, giving their method the utterly bizarre name "Simulation of Simplicity". I think it took people a while to realise that they were actually doing the same thing as people doing automatic differentiation, and so the whole notion of an infinitesimal perturbation left people wondering what in fact they were computing. More serious was this: writing robust computational geometry code that catches all of the degenerate cases is hard. How could such a simple trick, slotting infinitesimally perturbed numbers into a sloppily written polymorphic algorithm, turn it into a watertight one? Was everyone in computational geometry going to be put out of work?

Luckily for the computational geometers, it still takes quite a bit of work to get the details exactly right, and even when you do, the result isn't necessarily an efficient solution to the problem. I know that when I had a messy computational geometry problem to solve while working on MI:2, I just wrote code that turned a blind eye to degeneracies and aggressively pruned away anything that looked suspicious, like near zero length edges and near zero area faces. If a stray pixel poked through somewhere it shouldn't, and tweaking the tolerance didn't fix it, an artist found a way to hide it. Sometimes beauty and elegance take second place to getting the job done on time.
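To make the connection with dual numbers concrete, here is a toy version of the tie-breaking idea (my own illustration, not Edelsbrunner and Mücke's actual scheme): represent each coordinate as a + a'ε and compare lexicographically, so that exact ties in the standard part are broken by the infinitesimal part.

```haskell
-- A quantity a + a'e.  The derived Ord is lexicographic: the
-- standard parts decide unless they tie, in which case the
-- infinitesimal parts break the tie.
data Perturbed = P Double Double deriving (Eq, Ord, Show)

-- Exact input data carries no perturbation...
exact :: Double -> Perturbed
exact a = P a 0

-- ...while the query coordinate is nudged by an infinitesimal.
nudged :: Double -> Perturbed
nudged a = P a 1

-- "Does this edge straddle the ray's height?" now never sees a tie,
-- even when a vertex lies exactly at the query height.
straddles :: Perturbed -> Perturbed -> Perturbed -> Bool
straddles py y1 y2 = (y1 > py) /= (y2 > py)
```

A vertex at exactly the query height behaves as if it were infinitesimally below the ray: `straddles (nudged 0) (exact 0) (exact 1)` is True while `straddles (nudged 0) (exact 0) (exact (-1))` is False, so the two edges meeting at that vertex are classified consistently and a ray through a vertex no longer miscounts.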