Last Wednesday, I went to a Bay Area Functional Programmers meeting to see Philip Wadler give a talk on his paper "Well-Typed Programs Can't Be Blamed." Wadler was a good speaker. He was friendly and interactive, and he insisted on having people ask lots of questions. He called himself an "absent-minded professor". Two things I didn't know about him were that he worked at Bell Labs and that he also worked at Sun on Generic Java.

Having tried to cram technical papers into my head way too late at night, I was pleased to find out that he's pretty easy to understand in person. However, his slides weren't always bug free; there were a couple of errors in the equations. Apparently, academic papers can have bugs too ;)

The goal of the paper was to mix typed and untyped modules in the same program. He admitted that untyped code lets you do fun and flexible things. His goal was to "coerce" a dynamic function to a typed function dynamically. This was done with a "contract", which is like a type declaration for a function that gets applied dynamically.

I mentioned to him that Python 3000 had type annotations, and that Guido had envisioned that someone would come along and do type assertions in very much the same way contracts work. He admitted that people were using these techniques in the real world. In fact, the new version of JavaScript will have optional types. I forgot to bring up that Visual Basic had optional types years ago.

It's interesting to note that several people in the audience were interested in using Python and Haskell together in the same program. By the way, rather than calling Python an "untyped" language (which is clearly an insult to its dynamic nature), he called it a "unityped" language because, basically, everything is an object.

A core theme of the paper was "the blame game": is it the function's fault, or the caller's fault?
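To make the contract idea concrete, here's a minimal sketch of my own in Python (this is my illustration, not code from the paper): a decorator that checks a function's argument and return values at call time, and, when a check fails, says who is to blame. A bad argument blames the caller; a bad result blames the function.

```python
def contract(arg_type, return_type):
    """Wrap a dynamically typed function in a runtime type contract."""
    def decorate(func):
        def wrapper(arg):
            # A bad argument is the caller's fault.
            if not isinstance(arg, arg_type):
                raise TypeError(
                    f"blame the caller: expected {arg_type.__name__}, "
                    f"got {type(arg).__name__}")
            result = func(arg)
            # A bad result is the function's fault.
            if not isinstance(result, return_type):
                raise TypeError(
                    f"blame {func.__name__}: expected {return_type.__name__}, "
                    f"got {type(result).__name__}")
            return result
        return wrapper
    return decorate

@contract(int, int)
def double(n):
    return n * 2

print(double(21))  # prints 42
try:
    double("21")
except TypeError as err:
    print(err)  # prints "blame the caller: expected int, got str"
```

This is, of course, a toy: the real machinery in the paper handles higher-order functions, where blame has to be threaded through function-valued arguments as well.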
Apparently, this is a normal part of talking about function semantics, but I had never heard it referred to that way before.

He extended his system to cover dynamic type systems (like Lisp), strict type systems (like Haskell), and super type systems (aka dependent type systems) all in the same program. A dependent type system lets you have types such as "int n where n > 15". Consider the ramifications of this: if you have two such numbers and you subtract one from the other, you might end up with an int that doesn't satisfy the constraint. Often, type constraints such as these must be enforced at runtime. What you end up with isn't so different from design by contract.

One interesting point he made is that the stronger your type system is, the braver you can be in relying on it. In a dynamic system, you might stick with simpler code because there's no compiler to catch your mistakes before running the code. This is an interesting counterpoint to Bruce Eckel's argument that declaring your types in Java is a waste of time that prevents you from doing more useful work, like writing unit tests. After all, you need to write unit tests anyway, since a type system can only catch a certain class of bugs.

Wadler admitted that even Haskell's type inferencer wasn't smart enough to figure out some of the things he wanted to do. He said that he sometimes found himself wishing he could just tell the compiler to fall back to checking his types at runtime.

Someone in the audience stated that Wadler's conclusions applied equally well to languages that have assignment. His response was "Oh, that's too bad" ;) Apparently, that was a quote from some other computer scientist who was told that his research applied equally well to languages with a goto statement ;)

There was a point thirty minutes into the talk where some people started to get lost. However, there were a few people who understood everything he said and wanted more.
I understood about 60% of what he wrote and 75% of what he said. I think that says a lot about how good a speaker he was.

His talk was very proof oriented. He thinks of himself as a mathematician. He brought up the fact that lambda calculus is isomorphic to proof reduction: "Programming isn't so arbitrary". This flew in the face of some of the things I've written, including "Everything Your Professor Failed to Tell You About Functional Programming" and "Computer Science: What's Wrong with CS Research". In thinking about interpreter design, I like to focus on what is useful, without regard to any similarities in the math world. (By the way, I am a mathematician too.) I have a feeling that computer science academics get too caught up in the math. Nonetheless, clearly, applying mathematical constraints can often be useful, and I found Wadler's point of view interesting.

Scala is a functional language, similar to Erlang, built for the JVM. Someone asked Wadler if he had looked at Scala yet. Clearly, Wadler has a functional bias ("It's what I prefer to work on."), and one big difference I discovered between Haskell and Scala is that Scala lets you interact with stateful Java libraries. After all, Java is very stateful. I could see how that would be a big turnoff for Wadler. Wadler's response was that clearly there's a need for state; a database of some kind is needed to hold long-term state. He seemed happy with what Erlang had done with Mnesia. However, he felt that state was a "high-level concept" and that it's better to keep low-level programming and concurrent programming stateless.

Agree or disagree, I definitely think Wadler's an interesting guy, and I'm glad I was able to go to his talk.