Rob Pike gave a great talk at Google on concurrency in Newsqueak. The gist is that you spawn concurrent tasks in separate lightweight processes and synchronize by communicating through channels. Those ideas worked their way into the Go programming language. After seeing that video I wanted to try it out myself in Common Lisp, of course. It turns out there is a Quicklisp-installable library called ChanL that provides just that functionality. I don't know how lightweight the threads are, but the interface is similar.

A simple but effective example of using channels is to perform a number of IO-bound tasks in parallel, such as retrieving several web pages; this turns out to be surprisingly easy. To start, I'll define a little timing macro:

(defmacro time-it (&body body)
  (let ((start-time (gensym)))
    `(let ((,start-time (get-internal-real-time)))
       ,@body
       (format t "Runtime: ~a milliseconds.~%"
               (- (get-internal-real-time) ,start-time)))))

Then, let's load the ChanL library and Drakma (for simple HTTP requests) using Quicklisp:

(ql:quickload "drakma")
(ql:quickload "chanl")

Our example set of URLs:

(defparameter *urls*
  (list "http://blog.thezerobit.com/"
        "http://quicklisp.org/"
        "http://www.cliki.net/index"
        "http://sbcl.org/"))

Now, here’s a function that performs a single HTTP request and times it.

(defun do-request (url)
  (let ((start-time (get-internal-real-time)))
    (format t "Starting request: ~a~%" url)
    (drakma:http-request url)
    (let ((elapsed (- (get-internal-real-time) start-time)))
      (format t "Completed request in ~a ms: ~a~%" elapsed url))))

This is what we might do to perform these HTTP requests without concurrency:

(time-it
  (dolist (url *urls*)
    (do-request url)))
;; Starting request: http://blog.thezerobit.com/
;; Completed request in 616 ms: http://blog.thezerobit.com/
;; Starting request: http://quicklisp.org/
;; Completed request in 819 ms: http://quicklisp.org/
;; Starting request: http://www.cliki.net/index
;; Completed request in 429 ms: http://www.cliki.net/index
;; Starting request: http://sbcl.org/
;; Completed request in 291 ms: http://sbcl.org/
;; Runtime: 2155 milliseconds.

It takes over 2 seconds to make all these requests in serial fashion. There may be a way to queue up these requests in parallel with the Drakma library itself, but instead we'll use ChanL. Let's define a function like DO-REQUEST that, instead of printing its progress, sends messages over a channel.

(defun do-request-chan (url chan)
  (let ((start-time (get-internal-real-time)))
    (chanl:send chan (format nil "Starting request: ~a~%" url))
    (drakma:http-request url)
    (let ((elapsed (- (get-internal-real-time) start-time)))
      (chanl:send chan
                  (format nil "Completed request in ~a ms: ~a~%" elapsed url)))))

We can use PEXEC to spawn a separate lightweight process for each HTTP request, then synchronize by waiting for the eight messages (two per URL) to come in on the channel we create.

(time-it
  (let ((chan (make-instance 'chanl:channel)))
    (dolist (url *urls*)
      (chanl:pexec () (do-request-chan url chan)))
    (dotimes (x 8)
      (format t (chanl:recv chan)))))
;; Starting request: http://blog.thezerobit.com/
;; Starting request: http://quicklisp.org/
;; Starting request: http://sbcl.org/
;; Starting request: http://www.cliki.net/index
;; Completed request in 291 ms: http://blog.thezerobit.com/
;; Completed request in 302 ms: http://www.cliki.net/index
;; Completed request in 306 ms: http://sbcl.org/
;; Completed request in 703 ms: http://quicklisp.org/
;; Runtime: 703 milliseconds.

Here, the whole operation takes only about as long as the longest single request, and the overhead of spawning separate threads is negligible compared to the time saved by making the requests concurrently. It is important to note that, by default, channels are unbuffered, so a call to SEND will block until a matching RECV is called on the same channel in a different thread. ChanL also provides buffered channels and some other goodies which I haven't touched on here.
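For a taste of buffered channels, here is a minimal sketch. It assumes ChanL's BOUNDED-CHANNEL class and its :SIZE initarg (check the ChanL documentation for the exact names in your version). On a buffered channel, SEND returns immediately while the buffer has room, rather than blocking until another thread calls RECV:

(let ((chan (make-instance 'chanl:bounded-channel :size 4)))
  ;; These sends return immediately because the buffer has room;
  ;; a fifth SEND would block until a RECV frees up a slot.
  (dotimes (i 4)
    (chanl:send chan i))
  ;; Drain the buffered values in a single thread -- no second
  ;; thread is needed, which would deadlock with an unbuffered channel.
  (dotimes (i 4)
    (format t "Got: ~a~%" (chanl:recv chan))))

This is handy when producers should be able to run ahead of consumers by a bounded amount instead of handing off one value at a time.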