A number of people have asked some very good questions about the details of how we teach certain concepts in our new functional programming class for freshmen at Carnegie Mellon. Rather than spray responses among the various comments, I’ll summarize a few major points here in hopes of helping others who may wish to teach a similar course. So this post is not really meant for a broad audience, but rather for the specialist; feel free to skip it if it seems too focused for your interests. I promise to write some more controversial material of more general interest soon! Meanwhile, here are a few thoughts presented in no particular order of importance.

Because the class is intended for beginners, we start from scratch with a language-based model of computation. This means that, apart from one regrettable misstep on our part, we never talk about extra-linguistic concepts like “run-time stacks” or “compilers.” The students are taught to think in terms of the language itself, and to reason directly about both the correctness and efficiency of the code they actually write, not the code that it allegedly compiles or translates to.

One beautiful feature of the language-based approach is that we start with a very familiar model of computation, the evaluation of polynomials over the reals. It is familiar to all students, and I think they find it very satisfying precisely because it has a direct computational flavor. You plug in for variables and simplify, and out comes the answer. We can draw on this as our starting point; programs are just generalized polynomials. In particular, in a functional language variables are variables: the mathematical concept of variable, which is given meaning by substitution, is precisely the programming concept of variable. It’s not analogous, it’s the same. So we can draw on their experience and ask them to prove things about programs using methods that build directly on what they already know. It’s extremely natural, and very beautiful, and leads easily to an integrated view of mathematics and programming.

Moreover, it’s a level playing field. Students with prior “programming experience” are, if anything, at a disadvantage, because they think they know things that are either wrong or inapplicable. One consequence is gender equity: even with American society being what it is, the women have no particular disadvantage with respect to the men when it comes to our style of thinking and programming. It’s a win-win-win situation.
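To illustrate the point (this is my own toy example, not code from the course), a polynomial really is a program, and evaluation really is substitution:

```sml
(* The polynomial x^2 + 2x + 1 rendered as a program. *)
fun p (x : int) : int = x * x + 2 * x + 1

(* Evaluating p 3 is substitution and simplification, exactly as in
   high-school algebra:
     p 3 ==> 3 * 3 + 2 * 3 + 1 ==> 9 + 6 + 1 ==> 16  *)
val sixteen = p 3
```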

Moving to a more technical level, the use of structural operational semantics is indispensable for providing a rigorous foundation for understanding program execution, reasoning about program correctness, and for defining a cost model to support reasoning about asymptotic complexity. There is no substitute for this! Without a crisp formulation of the semantics of the language, it is impossible to discuss any of these issues in a meaningful and technically precise way. With it you can readily resolve any questions about “what happens if …”, giving the students a tool that they can use themselves to answer such questions. Moreover, as program verification becomes more important in industrial practice, as well as academic research, it is essential that students be educated in the tools of semantics. Structural operational semantics is very easy to teach, and presents no problems for the students. We just use it, and they get it without any fuss or bother. It is a natural extension of their experience with high school algebra. Be not afraid of using these tools to teach programming!
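To give the flavor of what this looks like (my notation here is only a sketch, not necessarily the rules used in the course), the transition rules for addition expressions can be written as:

```latex
% Illustrative transition rules, with e \mapsto e' read as "e steps to e'".
\frac{e_1 \mapsto e_1'}{e_1 + e_2 \;\mapsto\; e_1' + e_2}
\qquad
\frac{e_2 \mapsto e_2'}{n_1 + e_2 \;\mapsto\; n_1 + e_2'}
\qquad
n_1 + n_2 \;\mapsto\; n \quad (\text{where } n \text{ is the sum of } n_1 \text{ and } n_2)
```

Evaluation is then nothing more than repeated application of such rules until a value is reached, which is exactly the plug-in-and-simplify process the students already know.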

As I’ve explained previously, it is a very good idea to avoid Booleans as much as possible. And, above all, don’t mention equality! The equals sign in programming languages is not the equals sign of mathematics. Propositions are not Booleans, and it only confuses matters to use notations that encourage this misconception. Related to this, avoid if-then-else entirely, and instead use only case analysis for branching, even when the value to be discriminated is a Boolean. We consistently write things like

case Int.compare (x, y) of
    LESS => ...
  | GREATER => ...
  | EQUAL => ...

rather than a nested conditional branch. It encourages students to think in terms of pattern matching, and prepares the ground for later developments, including a smooth transition to pattern matching over more complex data structures and reasoning inductively when programming recursively.
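As a concrete illustration of this style (my own example, not an exercise from the course), here is membership in a binary search tree, with every branch a case analysis on the order computed by Int.compare:

```sml
(* Membership in a binary search tree: branching is by case analysis
   on the order value returned by Int.compare, not by nested
   if-then-else on Boolean comparisons. *)
datatype tree = Empty | Node of tree * int * tree

fun member (Empty, _) = false
  | member (Node (l, y, r), x) =
      (case Int.compare (x, y) of
           LESS => member (l, x)
         | GREATER => member (r, x)
         | EQUAL => true)
```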

Teaching parallelism is completely straightforward, because the model of computation inherently avoids unnatural and unnecessary problems of interference, and focuses attention on the critical issue of data dependencies among computations in a program. Students have no trouble computing the work (sequential time complexity) or span (parallel time complexity) of a program, and have no problems reading off recurrences for the respective time complexities. Later, when we introduce sequences, the idea of computing in parallel with the entire sequence, rather than item-by-item (as encouraged by the dreadful iterators so beloved in the OO world), comes naturally and easily. The key to this, of course, is that data structures in a functional language are naturally persistent; it is monstrously hard to use ephemeral data structures in a parallel computation, and is not something we could hope to teach freshmen.
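A minimal sketch of the kind of analysis involved (my example; the course works with richer types): summing a balanced tree, whose two recursive calls have no data dependency and hence may be evaluated in parallel.

```sml
(* Summing a balanced tree of integers.  The two recursive calls are
   independent -- no data dependency -- so they may run in parallel.
   Reading off the recurrences for a balanced tree with n leaves:
     work: W(n) = 2 W(n/2) + O(1), so W(n) = O(n)
     span: S(n) = S(n/2) + O(1),   so S(n) = O(log n)  *)
datatype tree = Leaf of int | Node of tree * tree

fun sum (Leaf n) = n
  | sum (Node (l, r)) = sum l + sum r
```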

A major decision for us is how to teach the expression and enforcement of abstraction in a program. In a departure from our previous approach, we have decided against using opaque ascription (sealing) as a means of enforcing abstraction. It has its virtues, but the problem is that it does not mesh well with other language features, in particular with substructures and type classes (views). For example, consider the signature of a mapping whose domain is an ordered type of keys:

signature MAPPING =
  sig
    structure Key : ORDERED
    type 'a mapping
    val lookup : Key.t -> 'a mapping -> 'a
    ...
  end

Unfortunately, sealing a structure with this signature renders the module useless:

structure IntMapping :> MAPPING =
  struct
    structure Key = Int
    type 'a mapping = 'a bst
    ...
  end

The trouble is that not only is the type 'a IntMapping.mapping abstract, as intended, but so is IntMapping.Key.t, which is not at all intended! To get around this we must create a specialization of the signature MAPPING using one of several means, such as

signature INT_MAPPING = MAPPING where type Key.t = int
structure IntMapping :> INT_MAPPING = ...

Of course one need not name the signature, but this illustrates the general problem. As things get more complicated, you have more and more clauses that specify the types of things (sharing specifications).

The alternative, which has worked very well for us, is to eschew opaque ascription, and instead rely on the datatype mechanism to make types abstract. So to give an implementation of the abstract type of mappings with keys being integers, we proceed as follows:

structure IntMapping : MAPPING =
  struct
    structure Key : ORDERED = Int
    datatype 'a bst = Empty | Node of 'a bst * (Key.t * 'a) * 'a bst
    type 'a mapping = 'a bst
    val insert = ...
  end

The point is that since the constructors of the type 'a bst are not exported in the interface, the type 'a IntMapping.mapping is abstract. Note as well that the use of transparent ascription on the structure Key ensures that keys really are integers (of type Int.int), and are not abstract, exactly as intended. This formulation allows us to state simple rules of signature matching (every specification in the signature has a corresponding declaration in the structure), and allows us to enforce abstraction boundaries with a minimum of fuss. The students have had absolutely no trouble with this at all, and we have had no trouble structuring our code this way.
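To make the pattern concrete, here is a minimal self-contained sketch of it (my reconstruction, not the course's actual library: the ORDERED and MAPPING signatures, the option-returning lookup, and the IntOrd helper are all assumptions made for illustration):

```sml
signature ORDERED =
  sig
    type t
    val compare : t * t -> order
  end

signature MAPPING =
  sig
    structure Key : ORDERED
    type 'a mapping
    val empty : 'a mapping
    val insert : Key.t * 'a -> 'a mapping -> 'a mapping
    val lookup : Key.t -> 'a mapping -> 'a option
  end

(* The Basis structure Int does not export a type named t, so for this
   sketch we build a small ORDERED view of it by hand. *)
structure IntOrd = struct type t = int  val compare = Int.compare end

structure IntMapping : MAPPING =
  struct
    structure Key : ORDERED = IntOrd  (* transparent: Key.t = int stays visible *)
    datatype 'a bst = Empty | Node of 'a bst * (Key.t * 'a) * 'a bst
    type 'a mapping = 'a bst
    val empty = Empty
    (* Unbalanced insertion, for brevity only. *)
    fun insert (k, v) Empty = Node (Empty, (k, v), Empty)
      | insert (k, v) (Node (l, p as (k', _), r)) =
          (case Key.compare (k, k') of
               LESS => Node (insert (k, v) l, p, r)
             | GREATER => Node (l, p, insert (k, v) r)
             | EQUAL => Node (l, (k, v), r))
    fun lookup k Empty = NONE
      | lookup k (Node (l, (k', v), r)) =
          (case Key.compare (k, k') of
               LESS => lookup k l
             | GREATER => lookup k r
             | EQUAL => SOME v)
  end
```

Because the constructors Empty and Node are not in MAPPING, clients cannot inspect an 'a IntMapping.mapping, while IntMapping.Key.t remains visibly int.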

When using functors (parameterized modules) to combine modules it is, of course, necessary to impose sharing constraints to ensure that only coherent compositions are possible. (Rather than take the space to explain this here, I will rely on the reader’s experience to understand what is involved.) These sorts of sharing specifications are perfectly natural, easily explained, and have presented absolutely no difficulties for the students. We illustrated their use in our game tree search example, in which the “referee” module is parameterized by the two “player” modules, which must of course cohere on their concept of a “game” (it’s no use pitting a chess player against a checkers player!). The code looks like this:

functor Referee
  (structure Player1 : PLAYER and Player2 : PLAYER
   sharing type Player1.Game.t = Player2.Game.t) : REFEREE = ...

The sharing specification states precisely and concisely the natural coherence constraint governing the two players. Here again, the dog we feared never barked, and the students found it all quite intuitive and unproblematic. This allowed them to expend their time on the actual complexities of the problem at hand, such as how to think about alpha-beta pruning in a parallel game-tree search, rather than get all tied up with the bureaucracy of structuring the code itself.
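To make the shape of such a functor concrete, here is a minimal self-contained sketch (the GAME signature, the trivialized PLAYER interface, and the play function are my assumptions for illustration; the course's actual PLAYER and REFEREE interfaces are richer):

```sml
signature GAME =
  sig
    type t
  end

signature PLAYER =
  sig
    structure Game : GAME
    val move : Game.t -> Game.t
  end

functor Referee
  (structure Player1 : PLAYER
   structure Player2 : PLAYER
   sharing type Player1.Game.t = Player2.Game.t) =
  struct
    (* The sharing constraint is what lets a position produced by one
       player be handed to the other: the two Game.t types are one. *)
    fun play (p : Player1.Game.t) : Player2.Game.t =
          Player2.move (Player1.move p)
  end
```

Without the sharing specification, Player1.Game.t and Player2.Game.t would be distinct abstract types, and play would be rejected by the type checker.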

The virtue of teaching bright young students is that they are bright and they are young. Their brilliance is, of course, a pleasure. We have to work hard to come up with sufficiently challenging exercises, and many students challenge us with their solutions to our problems. Their youth means that they come to us with a minimum of misconceptions and misinformation that they’ve picked up on the street, and are open to learning methods that are entirely non-standard (at least for now) with respect to what their friends are learning at other universities. What makes Carnegie Mellon a special place is precisely that the students are pushed into thinking hard in ways that they otherwise would not be. Personally, I hope that more universities worldwide build on what we have started, and give their students the same benefits that ours are already enjoying.
