Lost and found

If I write 10^8 in Haskell, how many multiplications will be used to compute the power? A stupid question? Well, for this example, but if I was computing x^8 and x has 100000 digits then I'd care. So how can I find out? I can look at the definition of the exponentiation operator. Here it is, from the Haskell report and GHC 6.8:

(^) :: (Num a, Integral b) => a -> b -> a
_ ^ 0 = 1
x ^ n | n > 0 = f x (n-1) x
  where f _ 0 y = y
        f a d y = g a d
          where g b i | even i    = g (b*b) (i `quot` 2)
                      | otherwise = f b (i-1) (b*y)
_ ^ _ = error "Prelude.^: negative exponent"

It's a bit involved, but decipherable. Another way would be to insert some kind of debug trace message in the multiplication.

I'd like to show a different way. Here's a ghci session:

Prelude> :m +Debug.Traced
Prelude Debug.Traced> let x = 10 :: Traced AsValue Integer
Prelude Debug.Traced> let y = x ^ 8
Prelude Debug.Traced> y
100000000
Prelude Debug.Traced> :t y
y :: Traced AsValue Integer
Prelude Debug.Traced> asExp y
10 * 10 * (10 * 10) * (10 * 10 * (10 * 10))
Prelude Debug.Traced> asSharedExp y
let _2 = 10 * 10; in _2 * _2 * (_2 * (10 * 10))
Prelude Debug.Traced> :t asSharedExp y
asSharedExp y :: Traced AsExp Integer

So what's going on? The value of x is a Traced AsValue Integer, which means that there's some magic going on. The variable can be used as usual, for instance in computing x^8. A traced value can also be shown as an expression, which is what asExp does. So a traced value is somewhat like the symbolic values I had in an earlier post, but in addition to having a symbolic representation they also have a normal value. But the output from asExp doesn't really help in answering how many multiplications there are, since the shown expression has no sharing; it is totally flattened. The asSharedExp function is the black magic here: it recovers the sharing, and we can see what happened. What we see is that there are actually five (5) multiplications involved in computing 10^8. This shows that the definition of exponentiation is suboptimal, since it can be done with three multiplications (three repeated squarings).

The asSharedExp function really does have some magic. It recovers information that is not part of the Haskell semantics, so from that we can conclude that it must contain the powerful incantation unsafePerformIO somewhere. How does it reveal implementation secrets? Look at this:

Prelude Debug.Traced> asSharedExp $ let a = 1 + 2 in a * a
let _1 = 1 + 2; in _1 * _1
Prelude Debug.Traced> asSharedExp $ (1+2) * (1+2)
(1 + 2) * (1 + 2)

The let expression and the expression where the variable has been expanded are semantically equal in Haskell, so no (pure) function can possibly give different results for them.
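Incidentally, the three-multiplication version mentioned above is just three repeated squarings. A quick sketch of my own (not from the post, and specialized to the exponent 8):

```haskell
-- x^8 computed as ((x^2)^2)^2: exactly three multiplications.
square :: Num a => a -> a
square y = y * y

pow8 :: Num a => a -> a
pow8 x = square (square (square x))

main :: IO ()
main = print (pow8 (10 :: Integer))  -- 100000000
```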

OK, so how does it work? I'll show a simplified version of the Traced module here that only deals with one traced type, but it can be extended. The (soon to be available) hackage package contains the extended version.

In the Traced type we need to represent expressions. We only need constants and function applications.

data Traced a = Con a | Apply a String [Traced a]

The function application contains the value, the name of the function, and the arguments the function was applied to.

In the exported interface from the module we want to be able to convert to and from the Traced type. Nothing exciting here.

traced :: a -> Traced a
traced = Con

unTraced :: Traced a -> a
unTraced (Con x) = x
unTraced (Apply x _ _) = x

instance (Show a) => Show (Traced a) where
    show = show . unTraced

instance (Eq a) => Eq (Traced a) where
    x == y = unTraced x == unTraced y

We want to show a traced value the same way we show the underlying value, and for equality we simply compare the underlying values. We'll also make traced numbers an instance of Num. All the functions (except fromInteger) build apply nodes.

instance (Num a) => Num (Traced a) where
    (+) = apply2 "+" (+)
    (-) = apply2 "-" (-)
    (*) = apply2 "*" (*)
    negate = apply1 "-" negate
    abs = apply1 "abs" abs
    signum = apply1 "signum" signum
    fromInteger = traced . fromInteger

apply1 s f x = Apply (f (unTraced x)) s [x]
apply2 s f x y = Apply (f (unTraced x) (unTraced y)) s [x, y]

A fancier version of this module could make the Traced type an applicative functor etc., but that's not really so important.
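As a sanity check, the pieces so far can be pasted into one standalone file. The definitions are copied from above; the main function is my own addition and just shows that a traced number prints as its plain value while still recording the operation at the top of the tree:

```haskell
-- Definitions from the post so far; main is added for the demonstration.
data Traced a = Con a | Apply a String [Traced a]

traced :: a -> Traced a
traced = Con

unTraced :: Traced a -> a
unTraced (Con x) = x
unTraced (Apply x _ _) = x

instance (Show a) => Show (Traced a) where
    show = show . unTraced

instance (Num a) => Num (Traced a) where
    (+) = apply2 "+" (+)
    (-) = apply2 "-" (-)
    (*) = apply2 "*" (*)
    negate = apply1 "-" negate
    abs = apply1 "abs" abs
    signum = apply1 "signum" signum
    fromInteger = traced . fromInteger

apply1 s f x = Apply (f (unTraced x)) s [x]
apply2 s f x y = Apply (f (unTraced x) (unTraced y)) s [x, y]

main :: IO ()
main = do
    let e = 2 + 3 * 4 :: Traced Integer
    print e                            -- shows the plain value: 14
    case e of
      Apply _ op _ -> putStrLn op      -- the top node records the "+"
      _            -> return ()
```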

Finally, we want to be able to show a traced expression as an expression tree instead of a value.

-- needs: import Data.Char (isAlpha) and import Data.List (intersperse)
showAsExp :: (Show t) => Traced t -> String
showAsExp (Con x) = show x
showAsExp (Apply _ s [x,y]) | not (isAlpha (head s)) =
    "(" ++ showAsExp x ++ " " ++ s ++ " " ++ showAsExp y ++ ")"
showAsExp (Apply _ s xs) =
    "(" ++ concat (intersperse " " $ s : map showAsExp xs) ++ ")"

We only export what is necessary, so the module header should be

module Traced(Traced, traced, unTraced, showAsExp) where

A quick test:

Traced> putStrLn $ showAsExp $ 10^8
(((10 * 10) * (10 * 10)) * ((10 * 10) * (10 * 10)))

And now we need the black magic to recover the sharing. We would like to have a unique label in each node of the expression tree. If we only had that we could see when two things referred to the same subexpression, and use the label to refer to it instead of the value. If we were doing this in, e.g., Java we could use object identity for this purpose. If we were doing it in C, we'd just compare pointers to the structs containing the expressions. But we're doing it in Haskell and none of this is available. It's not unavailable because Haskell wants to make our lives difficult, quite the contrary. Languages that allow pointer comparisons (object identity) must introduce an extra level of indirection in the semantics to explain how this is possible. So now it's not enough to know we have the number 5, we need to know that this is the number 5 at location 1000. And that's not the same as the number 5 at location 1010. The numbers contained in the locations might be the same, but the locations are not interchangeable since they could, e.g., be mutated differently in the future.

So is everything lost in Haskell? Not at all. GHC implements a library of stable names, which is (at first approximation) the same as the address of something in memory. The API to System.Mem.StableName is very simple.

data StableName a
makeStableName :: a -> IO (StableName a)
hashStableName :: StableName a -> Int

The makeStableName function is like the & operator (address of) in C: it returns the "address" of something. So StableName is like a C pointer type. And the hashStableName function converts the "address" to an Int, a bit like casting a pointer to an int in C. (In the simplified code below we'll assume that two stable names never hash to the same Int, although this is not absolutely guaranteed.)

How come this interface is OK? After all, calling makeStableName on semantically equal values can yield different results if the values happen to be stored in different parts of memory. It's OK because the returned value is in the IO monad. In the IO monad anything can happen, so it's perfectly reasonable for the same argument to yield different results in different calls.
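A tiny experiment with this API (my own example, not from the post). The value is forced before taking its stable name, since, as discussed next, the stable name of an unevaluated thunk can change when it gets evaluated; two stable names taken from the same evaluated object then compare equal:

```haskell
import System.Mem.StableName

main :: IO ()
main = do
    let xs = map (* 2) [1 .. 10 :: Int]
    print (sum xs)            -- forces xs first (prints 110)
    n1 <- makeStableName xs
    n2 <- makeStableName xs
    -- Same evaluated object, so the stable names are equal.
    print (n1 == n2)
```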

Despite the name and the documentation, GHC's stable names have a stability flaw. The stable name changes when an unevaluated expression is evaluated. It's annoying, but not a major flaw. Once evaluated, the stable name is guaranteed to remain the same. (The implementation of stable names is not as simple as taking the address of an object, since the GC can move objects around.)

So now we have a way to get the identity of each node in the expression tree built by the Traced type. The plan is to traverse the expression tree. For each node we'll use the "address" of it as its name, and remember that we've seen this node. As we traverse the tree we build a list of nodes we've seen. This list then corresponds to the let bindings we'd like to display in the final result. As we traverse the nodes we'll also replace each node with a reference to its name, so we can see the sharing in the result.

To be able to represent the expression with rediscovered sharing we need to extend the Traced type. We need variable references and let bindings. In fact, we'll only generate a top level let binding, but we include it in the data type anyway.

data Traced a
    = ...
    | Var a Name
    | Let [(Name, Traced a)] (Traced a)

type Name = String

We store the value in the Var constructor to make unTraced possible. New cases:

...
unTraced (Var x _) = x
unTraced (Let _ e) = unTraced e

And we want to show the new constructors:

...
showAsExp (Var _ n) = n
showAsExp (Let bs e) = "let " ++ concatMap bind bs ++ "in " ++ showAsExp e
  where bind (n, e) = n ++ " = " ++ showAsExp e ++ "; "

To rediscover the sharing we need to keep some state. We need a mapping from the node address to the Var that should replace it, and we need to accumulate the bindings, i.e., the pairs of node name and expression. So we need a state monad to keep track of the state. We also need to be able to call makeStableName in the IO monad, so we need IO as well. We'll do this by using the state transformer monad on top of IO. So the function that discovers sharing will take a traced value and return a new traced value, all in this monad. So we have the type:

type TState a = (M.IntMap (Traced a), [(Name, Traced a)])

share :: Traced a -> StateT (TState a) IO (Traced a)

Assuming some imports:

import Control.Monad.State
import qualified Data.IntMap as M
import System.Mem.StableName

Now the body:

share e@(Con _) = return e
share e@(Apply v s xs) = do
    h <- liftIO $ fmap hashStableName (makeStableName e)
    (sm, bs) <- get
    case M.lookup h sm of
        Just ie -> return ie
        Nothing -> do
            let n = "_" ++ show h
                ie = Var v n
            put (M.insert h ie sm, bs)
            xs' <- mapM share xs
            (sm', bs') <- get
            put (sm', (n, Apply v s xs') : bs')
            return ie

Constants are easy; we don't bother sharing them (we could, but it's not that interesting). For an apply node, we get its stable name (it's already evaluated, so it won't change) and hash it. We then grab the map (it's an IntMap, a fast map from Int to anything) from the state and look up the node. If the node is found we just return the expression from the map. If it's not in the map, we invent a name (using the hash value) and insert a Var node in the map so we'll never process this node again. We then recursively traverse all the children of the apply node, and rebuild a new apply node with those new children. This constitutes a binding, and we stick it in the accumulated list of bindings. Finally we return the new Var node.

At the top level we need to call share and then build a let expression. The bindings end up with the top node first in the list, so it looks nicer to reverse it.
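That top-level step can be sketched as follows, folded together with everything above into one runnable file. The names pow, shareTop, and showAsShared are my own guesses (the post has not named the top-level function here), and since newer GHCs ship an improved (^), a local copy of the GHC 6.8 definition quoted at the top is used to reproduce the five-multiplication count. The hash-based binding names vary between runs, so main just checks the value and counts the recovered multiplications:

```haskell
-- Speculative self-contained assembly of the simplified Traced module.
import Control.Monad.State
import Data.Char (isAlpha)
import Data.List (intersperse)
import qualified Data.IntMap as M
import System.IO.Unsafe (unsafePerformIO)
import System.Mem.StableName

type Name = String

data Traced a
    = Con a
    | Apply a String [Traced a]
    | Var a Name
    | Let [(Name, Traced a)] (Traced a)

traced :: a -> Traced a
traced = Con

unTraced :: Traced a -> a
unTraced (Con x) = x
unTraced (Apply x _ _) = x
unTraced (Var x _) = x
unTraced (Let _ e) = unTraced e

instance (Num a) => Num (Traced a) where
    (+) = apply2 "+" (+)
    (-) = apply2 "-" (-)
    (*) = apply2 "*" (*)
    negate = apply1 "-" negate
    abs = apply1 "abs" abs
    signum = apply1 "signum" signum
    fromInteger = traced . fromInteger

apply1 s f x = Apply (f (unTraced x)) s [x]
apply2 s f x y = Apply (f (unTraced x) (unTraced y)) s [x, y]

showAsExp :: (Show a) => Traced a -> String
showAsExp (Con x) = show x
showAsExp (Apply _ s [x, y]) | not (isAlpha (head s)) =
    "(" ++ showAsExp x ++ " " ++ s ++ " " ++ showAsExp y ++ ")"
showAsExp (Apply _ s xs) =
    "(" ++ concat (intersperse " " (s : map showAsExp xs)) ++ ")"
showAsExp (Var _ n) = n
showAsExp (Let bs e) = "let " ++ concatMap bind bs ++ "in " ++ showAsExp e
  where bind (n, e') = n ++ " = " ++ showAsExp e' ++ "; "

type TState a = (M.IntMap (Traced a), [(Name, Traced a)])

share :: Traced a -> StateT (TState a) IO (Traced a)
share e@(Apply v s xs) = do
    h <- liftIO $ fmap hashStableName (makeStableName e)
    (sm, bs) <- get
    case M.lookup h sm of
        Just ie -> return ie
        Nothing -> do
            let n = "_" ++ show h
                ie = Var v n
            put (M.insert h ie sm, bs)
            xs' <- mapM share xs
            (sm', bs') <- get
            put (sm', (n, Apply v s xs') : bs')
            return ie
share e = return e                       -- constants are left unshared

-- Recover sharing, binding the top node last, hence the reverse.
shareTop :: Traced a -> Traced a
shareTop e = unsafePerformIO $ do
    (e', (_, bs)) <- runStateT (share e) (M.empty, [])
    return (Let (reverse bs) e')

showAsShared :: (Show a) => Traced a -> String
showAsShared = showAsExp . shareTop

-- The GHC 6.8 power function quoted at the top of the post.
pow :: (Num a, Integral b) => a -> b -> a
pow _ 0 = 1
pow x n | n > 0 = f x (n - 1) x
  where
    f _ 0 y = y
    f a d y = g a d
      where
        g b i | even i    = g (b * b) (i `quot` 2)
              | otherwise = f b (i - 1) (b * y)
pow _ _ = error "pow: negative exponent"

main :: IO ()
main = do
    let y = pow 10 8 :: Traced Integer
    print (unTraced y)
    -- Each recovered binding holds exactly one multiplication,
    -- so counting '*' in the shared form counts multiplications.
    print (length (filter (== '*') (showAsShared y)))
```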