In the previous post, I went over the protocols and file formats required for the BitTorrent protocol. This article will focus on laying down a strong foundation for our future efforts.

To ease into our main topic gradually, let’s first do something simple, like asking the tracker for peers.

This will be an HTTP request. It usually goes to /announce, but the exact URL is stored in the MetaInfo. For reference, all the options are documented here. However, many modern trackers don't support this reference behavior, so we will implement our first extension right away to stay compatible with them. This extension simply asks the tracker to provide a tightly packed array of peer IP addresses. We request it by including the compact parameter. Following our get-it-working-first approach, we come up with a quick and dirty solution:

queryTracker :: ClientState -> IO [SockAddr]
queryTracker state = do
  let meta = metaInfo state
      url = fromByteString (announce meta) >>= parseUrl
      req = setQueryString [ ("peer_id", Just (myPeerId state))
                           , ("info_hash", Just (infoHash meta))
                           , ("compact", Just "1")
                           , ("port", Just (BC.pack $ show globalPort))
                           , ("uploaded", Just "0")
                           , ("downloaded", Just "0")
                           , ("left", Just (BC.pack $ show $ Meta.length $ info meta))
                           ] (fromJust url)
  manager <- newManager defaultManagerSettings
  response <- httpLbs req manager
  let body = BL.toStrict $ responseBody response
  case AC.parseOnly value body of
    Right v -> return $ decodePeers $ v ^. (bkey "peers" . bstring)
    _ -> return []

The result, as you could have guessed from the lenses, is a dictionary, with the peers key being of interest to us. We will skip decodePeers for now; there's been enough binary data handling in the previous article. For now, the uploaded, downloaded, and left options are hardcoded – we don't support resuming interrupted downloads just yet.
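Even though we're skipping the real decodePeers, the compact format itself is simple: each peer occupies exactly 6 bytes, a 4-byte IPv4 address followed by a 2-byte big-endian port. Here is a minimal sketch of the decoding, working on a plain [Word8] instead of a ByteString and returning numeric pairs instead of SockAddrs (the name and types here are my own illustration, not the actual implementation):

```haskell
import Data.Bits (shiftL, (.|.))
import Data.Word (Word8, Word16, Word32)

-- Each compact entry is 6 bytes: 4 for the IPv4 address, 2 for the port,
-- both in network (big-endian) byte order.
decodeCompactPeers :: [Word8] -> [(Word32, Word16)]
decodeCompactPeers (a : b : c : d : p1 : p2 : rest) =
  let ip   = foldl (\acc o -> (acc `shiftL` 8) .|. fromIntegral o) 0 [a, b, c, d]
      port = (fromIntegral p1 `shiftL` 8) .|. fromIntegral p2
  in (ip, port) : decodeCompactPeers rest
decodeCompactPeers _ = []  -- a trailing fragment shorter than 6 bytes is dropped
```

The real version would additionally wrap each pair in a SockAddrInet, but the byte layout is the whole trick.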

Now that we have a list of peers in the network, we should reach out to them to start the conversation.

reachOutToPeer :: ClientState -> SockAddr -> IO ()
reachOutToPeer state addr = do
  sock <- socket AF_INET Stream defaultProtocol
  connect sock addr
  handle <- socketToHandle sock ReadWriteMode
  input <- BL.hGetContents handle
  writeHandshake handle state
  let Just (nextInput, BHandshake hisInfoHash hisId) = readHandshake input
      ourInfoHash = infoHash $ metaInfo state
      bitField = BF.newBitField (pieceCount state)
  when (hisInfoHash == ourInfoHash) $ do
    let pData = newPeer bitField addr hisId
    mainPeerLoop state pData nextInput handle

First, we open the connection and then lazily read all of its input into a ByteString. We exchange handshakes and, if everything looks correct, we enter the main peer loop, in which we'll handle all further events and messages.

Main peer loop

Now we have to implement the actual algorithms to drive the protocol (and our downloads). Initially, I did this without any abstractions, writing all of it as plain IO actions. As you might imagine, this quickly got out of hand and became unmaintainable. Modifying the data layout or just changing the logic in general was error-prone and verbose. I missed the time I spent implementing the parsers – pure data is just easier to work with and test. So I looked for a way to make this more manageable. Having found free monads, I decided to use them.

If you don’t know free monads, this would be the time to check them out – this is a good introduction. However, I will do my best to explain what we are gaining by using them and I promise to stay away from the theory. In fact, I’ll avoid any definitions and instead focus on things that are possible by using them.

We want to handle each peer connection (the main peer loop) in its own thread. As far as I understand, this should work best with the I/O scheduler in GHC. We won’t need any manual sleeps or polling for events. Instead, we’ll try to take full advantage of the multi-threaded, preemptively-scheduled event loop the GHC runtime has to offer.

So what kinds of events do we need to handle in our main loop? Obviously, all the messages the peer sends have to reach us. We will also define a way to push global messages to our peer loop. To explain why we need a way to broadcast global messages, let me present a very real example of how a peer connection may play out.

Suppose we request a chunk from a peer and we mark it as requested somewhere. This ensures other peers don't waste bandwidth by downloading the same data we are. What happens, though, if this peer disconnects or crashes before it receives the data? Other peer loops can't detect this on their own, so the piece would never be completed.

One solution would be to remember all the requests in flight for this peer and clean up after ourselves on exception. This is perfectly valid and we'll do that, but there still remains an edge case. It may happen that all other pieces are complete when our peer loop crashes, so freeing the piece alone won't solve this completely. We need a way to wake up other peer loops to be sure the piece gets picked up. Shared messages will save us here: after cleaning up from an exception, we will broadcast a wakeup message to all peer loops.
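That recovery plan can be sketched with plain exception handling plus a broadcast channel. The names here (withRequestCleanup, requestedChunks, wakeupChan) and the Set-of-indices model are mine, purely to illustrate the shape of the cleanup, not the client's actual data layout:

```haskell
import Control.Concurrent.Chan
import Control.Concurrent.STM
import Control.Exception (finally)
import qualified Data.Set as Set

-- Run a peer loop action; whether it finishes or throws, give our
-- in-flight chunk requests back to the shared pool and wake the other
-- peer loops up so the freed chunks get picked up again.
withRequestCleanup :: TVar (Set.Set Int) -> Chan () -> Set.Set Int -> IO a -> IO a
withRequestCleanup requestedChunks wakeupChan mine action =
  action `finally` do
    -- un-mark the chunks we had claimed as requested
    atomically $ modifyTVar' requestedChunks (`Set.difference` mine)
    -- nudge everyone; idle peer loops re-check what needs downloading
    writeChan wakeupChan ()
```

Using finally rather than a bare exception handler means the pool is also tidied up on a clean exit, which is harmless here and keeps the invariant simple.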

That’s a lot of details already and we haven’t even written a line of peer loop code yet. Don’t think this was my oversight from the beginning – I learned all of this the hard way.

To combine all these event sources into one, we can define some abstractions already:

data PeerEvent = PWPEvent PWP | SharedEvent -- nothing for now

And to consume that, let’s define the first operation for our monad:

getPeerEvent :: F PeerMonad PeerEvent

I’m using F here: the Church-encoded free monad for our functor PeerMonad .

Operations

The point of all this is to define the most basic operations we need for our protocol logic in this monad. It won’t do anything on its own just yet, but we can already start blocking out the requirements.

Let’s think about the interface for our monad. Starting from the obvious ones, we will need some form of access to the MetaInfo and PeerData.

getPeerData :: F PeerMonad PeerData
getMeta :: F PeerMonad MetaInfo

We are talking to a peer, after all, so we need a way to send it messages:

emit :: PWP -> F PeerMonad ()

Since our PeerData object contains booleans that change over time, we need a way to mutate this data:

updatePeerData :: PeerData -> F PeerMonad ()

Obviously, the goal of our application is to exchange data we keep on a hard drive, so we need a way to access that:

readData :: Offset -> Length -> F PeerMonad ByteString
writeData :: Offset -> ByteString -> F PeerMonad ()

Finally, we need some shared memory with other peer loops for multiple things:

runMemory :: F MemoryMonad a -> F PeerMonad a

Shared memory

MemoryMonad will be a different free monad with its own operations. We need it to be atomic, so we'll model it to map well onto software transactional memory, its natural execution target. It will support reads and writes for all shared structures.
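To see why atomicity matters here, consider claiming a free chunk: the check and the take must happen as one indivisible step, or two peer loops could grab the same chunk. Written directly in STM (with a hypothetical freeChunks list standing in for the real chunk fields), it looks like this:

```haskell
import Control.Concurrent.STM

-- Atomically pop a free chunk index, if any. No other transaction can
-- observe the list between our read and our write.
claimChunk :: TVar [Int] -> IO (Maybe Int)
claimChunk freeChunks = atomically $ do
  chunks <- readTVar freeChunks
  case chunks of
    []       -> return Nothing
    (c : cs) -> writeTVar freeChunks cs >> return (Just c)
```

Keeping MemoryMonad as its own free monad means the protocol logic only states which reads and writes it needs; the evaluator is the one that decides they all run inside a single atomically block.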

What do we need to share with other peer loops to coordinate our efforts and work together?

The full BitField of the active torrent to see which pieces we can serve and which need to be downloaded.

Chunk fields for each incomplete piece to split them up into parts any peer can contribute to by downloading.

Intermediate piece availability data, so we have a cached version from all peers. This will be particularly helpful for our initial rare-first algorithm.
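The availability cache in that last point can be modelled as a simple count per piece index. Here is a sketch of what addToAvailability and removeFromAvailability could look like, using a Map and a plain list of booleans in place of the real BF.BitField – an illustration of the idea, not the client's actual code:

```haskell
import qualified Data.Map.Strict as Map

-- piece index -> number of peers that have the piece
type Availability = Map.Map Int Int

-- Count every piece the bitfield marks as present.
addToAvailability :: [Bool] -> Availability -> Availability
addToAvailability bf avail =
  foldr (\ix -> Map.insertWith (+) ix 1) avail
        [ ix | (ix, True) <- zip [0 ..] bf ]

-- The inverse: forget a previously counted bitfield.
removeFromAvailability :: [Bool] -> Availability -> Availability
removeFromAvailability bf avail =
  foldr (Map.adjust (subtract 1)) avail
        [ ix | (ix, True) <- zip [0 ..] bf ]
```

Removing the old bitfield before adding the updated one keeps the counts consistent whenever a peer announces a new piece.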

Protocol logic

This was a very lengthy introduction. Let’s check if all this effort was worth it by building up real expressions that we may use in our implementation.

Have

Suppose we were to handle the Have message that we've just received from a peer. In case you don't recall from the previous post, it tells us that the peer has completed a piece. We need to act on this message – we may have just located a source for the last missing piece that nobody else has.

peerData <- getPeerData
PWPEvent (Have ix) <- getPeerEvent
let oldBf = peerBitField peerData
    newBf = BF.set oldBf ix True
    peerData' = peerData { peerBitField = newBf }
runMemory $ modifyAvailability $
  addToAvailability newBf . removeFromAvailability oldBf
updatePeerData peerData'

We prepare the updated version of the bitfield and make sure to save it in the global availability cache and in our local state.

Interested

A quick implementation for the Interested message is to simply unchoke the peer right away.

peerData <- getPeerData
PWPEvent Interested <- getPeerEvent
when (amChoking peerData) $ do
  emit Unchoke
  updatePeerData $ peerData { amChoking = False, peerInterested = True }

This one is quite self-explanatory.

One nice feature of this approach is that we can create rich operations out of smaller ones. It's possible to keep the PeerMonad code with its most basic operations in one module, build higher-level functions from them, and export only those. That gives us a nicer abstraction level that's easier to work with.

Real Evaluator

So far, we’ve built up a base upon which we can build meaningful logic. The only problem is that it does not quite do anything yet. Our compiler has no way of knowing what we meant defining our operations, so let’s teach it.

The way we’ve split up memory into shared and local makes an interesting thing possible: we can express all local memory mutations in a StateT monad. Since we insisted on handling all the events for a peer in a single thread, there’s no need to share it with anyone – we’re free to modify it whenever we like.

The ClientState structure houses references to shared memory and a few other things we need. It’s quite convenient that besides the shared memory, other variables are immutable. We can represent this with the ReaderT monad.

Our evaluator will take this form:

type PeerMonadIO = ReaderT ClientState (StateT PeerState IO)

Now that we know the environment our PeerMonad will be evaluated in, we can write a function that takes all the needed parameters and runs it.

There’s a lot of stuff going on below, so bear with me:

runPeerMonad :: ClientState -> PeerData -> [PWP] -> Handle -> F PeerMonad a -> IO a
runPeerMonad state pData messages outHandle t = do
  chan <- newChan
  void $ forkIO $ traverse_ (writeChan chan . PWPEvent) messages
  sharedChan <- dupChan (sharedMessages state)
  void $ forkIO $ forever $ readChan sharedChan *> writeChan chan SharedEvent
  let peerState = PeerState pData chan outHandle
  evalStateT (runReaderT (inside t) state) peerState
  where inside = iterM evalPeerMonadIO

Remember the talk about utilising the I/O scheduler and making sure we do it in an evented style? This is where channels come into play. They are a simple concurrency primitive for passing messages. We first create a new channel that we'll read from whenever our logic triggers getPeerEvent. Then, we connect the event sources to feed data into that channel.

We make sure to push forward any PWP message that appears on the wire.

We use dupChan, which takes a global channel we keep in ClientState and makes a copy of it. This means that every time someone wants to broadcast a message to all peer loops, they can simply write to the channel in the global state and every peer loop will get a copy of it. Even if that global channel has some messages in its buffer, dupChan does not inherit them. And rightfully so: we don't want to respond to messages that originated before we opened a connection to the peer. The semantics of channels fit right into our use case.

We assemble our PeerState structure, of which we'll modify only PeerData (with the booleans and whatnot).

We execute the monad stack with read-only ClientState and mutable PeerState. The inside function just evaluates all operations of our free monad sequentially. It uses evalPeerMonadIO, a function that translates expressions from F PeerMonad to PeerMonadIO.
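The buffer behaviour of dupChan described above is easy to demonstrate in isolation (a standalone snippet, not part of the client):

```haskell
import Control.Concurrent.Chan

main :: IO ()
main = do
  broadcast <- newChan
  writeChan broadcast "old"       -- buffered before our peer loop exists
  peerCopy  <- dupChan broadcast  -- the peer loop joins here
  writeChan broadcast "new"       -- a broadcast made after joining
  msg <- readChan peerCopy
  putStrLn msg                    -- prints "new"; "old" was never inherited
```

The duplicate starts out empty, and from that point on every write to the original channel is delivered to both.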

A cool trick I used here is the [PWP] argument. It does not mean that we only supply a few messages to our monad: this list is infinite, built up lazily from the infinite input string:

messageStream :: BL.ByteString -> [PWP]
messageStream input =
  case runGetOrFail get input of
    Left _ -> []
    Right (rest, _, msg) -> msg : messageStream rest

runGetOrFail consumes just enough input to either parse a value or return an error when parsing fails. We have to be careful around infinite strings and lists, though: keeping a reference to the beginning would retain all of it!

So what does our evalPeerMonadIO look like?

evalPeerMonadIO :: PeerMonad (PeerMonadIO a) -> PeerMonadIO a
evalPeerMonadIO (RunMemory a next) = do
  state <- ask
  res <- liftIO $ atomically $ runMemoryMonad state a
  next res
evalPeerMonadIO (GetPeerData next) = do
  PeerState pData _ _ <- get
  next pData
evalPeerMonadIO (Emit pwp next) = do
  PeerState _ _ handle <- get
  liftIO $ BL.hPut handle (Binary.encode pwp)
  next
evalPeerMonadIO (GetMeta next) = do
  meta <- metaInfo <$> ask
  next meta
evalPeerMonadIO (ReadData o l next) = do
  state <- ask
  let hdl = outputHandle state
      lock = outputLock state
  a <- liftIO $ FW.read hdl lock o l
  next a
evalPeerMonadIO (WriteData o d next) = do
  state <- ask
  let hdl = outputHandle state
      lock = outputLock state
  liftIO $ FW.write hdl lock o d
  next
evalPeerMonadIO (UpdatePeerData pData next) = do
  pState <- get
  put $ pState { peerStateData = pData }
  next
evalPeerMonadIO (GetPeerEvent next) = do
  pState <- get
  msg <- liftIO $ readChan $ peerStateChan pState
  next msg

next represents the next action of our monad to execute. Sometimes it expects an argument when we want to push data to our monad, so we provide it.

We use ask to retrieve the value of our global state. get and put are used to access the local state. We do disk I/O in the same thread using the FW module, with a global lock for the time being.

Testing the evaluator

You might have noticed that separating the base operations this way makes them easy to test independently. This kind of decoupling makes the code easier to test, and it's good to have – just like purity.

If we were to test handling the Have message, we might have done something like this:

spec :: SpecWith ()
spec = do
  tmpdir <- runIO getTemporaryDirectory
  state <- runIO $ newClientState tmpdir testMeta 9999
  let addr = testAddr [1, 0, 0, 127] 9999
      peerId = B.replicate (fromEnum '1') 20
      bf = BF.newBitField 4
      pData = newPeer bf addr peerId
  describe "handlePWP" $ do
    describe "for Have message" $ do
      it "properly adjust the peer bitfield" $ do
        let exp = handlePWP (Have 2) *> getPeerData
        pData' <- runPeerMonad state pData [] stdout exp
        BF.get (peerBitField pData') 2 `shouldBe` True

We pass the core expression handlePWP (Have 2) and then ask for peer data to be returned after. We then assert that the bitfield (which starts with False values) was indeed changed.

To verify how PeerMonadIO implements basic operations, we might test it this way:

describe "PeerMonadIO" $ do
  describe "local state manipulation" $ do
    it "respects state updates" $ do
      let pData' = pData { amChoking = False }
          exp = updatePeerData pData' *> getPeerData
      returned <- runPeerMonad state pData [] stdout exp
      returned `shouldBe` pData'

Running the tests:

$ stack test --test-arguments '--format specdoc --match "PeerMonad"'

PeerMonad
  handlePWP
    for Have message
      properly adjust the peer bitfield
  PeerMonadIO
    local state manipulation
      respects state updates

Finished in 0.0262 seconds
2 examples, 0 failures

Test evaluator

Although testing the operations provided by PeerMonadIO the way we just did is great, we can do even better. It should be possible to develop a pure evaluator to test protocol logic without touching either the network or the disk! Furthermore, any test in which one peer connection is enough to make an assertion doesn't need transactional memory either. We could run all of it in the StateT monad.

We can take it one step further and have a version of the pure evaluator that we can script dynamically. The supplied function would have access to every past message the peer loop under test emitted. Based on that, it could trigger some specific behaviors that we want tested.

That’s a stretch, but one that makes a lot of sense to me. Especially when the application grows and so does the number of extensions supported.

Closing thoughts

By using free monads, we've effectively abstracted our protocol logic away from its implementation. This gives us freedom in how we evaluate it, but also makes the logic itself easier to structure. Back when I used the plain IO approach, it was difficult to extract common pieces and reuse code. Now, with free monads, we have base operations that we can build upon.