not-actually-meta-this-time

Last month one of the blogs I read posted a list of foundational books, a small number designated as “core frameworks.” I’m a collector of ideas and I take this sort of recommendation seriously. I hadn’t heard of this one (goodreads) before and it looked promising, so I bumped it to the top of my queue.

Notes on the Synthesis of Form (hereafter Notes) starts off strong, with a chapter titled “The Need for Rationality,” which will appeal to certain types, myself included. An excerpt:

“While … a great deal of what is generally understood to be logic is concerned with deduction, logic… refers to something far more general. It is concerned with the form of abstract structures, and is involved the moment we make pictures of reality and then seek to manipulate these pictures so that we may look further into the reality itself.” (Notes p. 8)

This is solid epistemology, the map / territory distinction, an idea at least as old as the 1930’s, though probably much older. Here, logic consists of operations on the map (or perhaps the map includes both an object and a logic that can manipulate the object). The utility of logic then rests on the correspondence between map and territory.

The accomplished and iconoclastic architect Christopher Alexander wrote Notes in 1964. For a 51-year-old book about efficient search through arbitrary design space it’s held up surprisingly well, despite some idiosyncrasies (examples forthcoming). Alexander was clearly well-versed in the computational theory of the day, and if he makes some naive assertions regarding the tractability of what he terms “selection problems” he can be forgiven: Hartmanis and Stearns (1965), the foundational paper on computational complexity, had yet to be published.

The Processes

Alexander describes two approaches to design, the unselfconscious process practiced in traditional or “primitive” cultures and the selfconscious process of modern design. The difficulty of solving modern design problems arises from having to simultaneously satisfy dozens of potentially conflicting requirements. Furthermore, this solution must be assembled from whole cloth. In contrast, the unselfconscious process deals with simpler contexts and proceeds gradually in the form of minor adjustments within the bounds of strong tradition.

Unselfconscious cultures are not automatically good at producing solutions. Indeed, they are fragile and can be disrupted by contact with selfconscious cultures:

“The Slovakian peasants used to be famous for the shawls they made. These shawls were wonderfully colored and patterned, woven of yarns which had been dipped in homemade dyes. Early in the twentieth century aniline dyes were made available to them. And at once the glory of the shawls was spoiled; they were now no longer delicate and subtle, but crude. This change cannot have come about because the new dyes were somehow inferior. They were as brilliant, and the variety of colors was much greater than before. Yet somehow the new shawls turned out vulgar and uninteresting. Now if, as it is so pleasant to suppose, the shawlmakers had had some innate artistry, had been so gifted that they were simply “able” to make beautiful shawls, it would be almost impossible to explain their later clumsiness. But if we look at the situation differently, it is very easy to explain. The shawlmakers were simply able, as many of us are, to recognize bad shawls, and their own mistakes. Over the generations the shawls had doubtless often been made extremely badly. But whenever a bad one was made, it was recognized as such, and therefore not repeated. And though nothing is to say that the change made would be for the better, it would still be a change. When the results of such changes were still bad, further changes would be made. The changes would go on until the shawls were good. And only at this point would the incentive to go on changing the patterns disappear. So we do not need to pretend that these craftsmen had special ability. They made beautiful shawls by standing in a long tradition, and by making minor changes whenever something seemed to need improvement. But once presented with more complicated choices, their apparent mastery and judgment disappeared.” (Notes pp. 53–4)

The quality of Alexander’s anthropology might be questioned, and a modern author would undoubtedly put things a little differently. On the other hand there are some real gems here, particularly on the subject of learning.

Alexander divides learning into two categories: one where you’re not told the rules and you learn by trial and error (and imitation), guided by reinforcement and punishment, and one where you are given rules to follow.

The second kind of learning requires teachers, people with authority.

“These teachers… have to condense the knowledge which was once laboriously acquired in experience, for without such condensation the teaching problem would be unwieldy and unmanageable. The teacher cannot refer explicitly to each single mistake which can be made, for even if there were time to do so, such a list could not be learned…. the teacher invents teachable rules within which he accommodates as much of his unconscious training as he can — a set of shorthand principles.” (Notes p. 35)

These explicit rules are lossy, whereas models learned through trial and error are incredibly rich and detailed, and fit the territory better than explicitly learned models. While comparatively easy to communicate, explicit models need their lacunae filled in through experience before they become useful. Intuitive trial-and-error learning tends to rely heavily on context, is difficult to transmit, and may even present barriers to introspection and self-awareness.

There does seem to be an assumption here that since our society is capable of selfconscious architecture that all construction is selfconscious and not stamped out from a template. This is untrue even in the typical case: the little boxes made of ticky tacky all look the same (incidentally the song was popularized in 1963, the year before Notes was published). A salient difference between ticky tacky houses and Mousgoum huts, mentioned in passing, is that modern housing developers don’t live in the houses they build. Many of the “misfits” of modern mass housing can be attributed to this misalignment of incentives.

Alexander describes a method that adapts the working parts of the unselfconscious process and applies them to modern complex design problems. What are these working parts? We’ll get there with the help of a little 1950’s neuroscience, but first let’s lay some groundwork.

Misfits

How do we know when we’ve solved a design problem? While it’s common to talk about satisfying a set of requirements, Alexander emphatically puts this in terms of misfits: does the design fit into its context (and indeed where do you draw the boundary between form and context)? A problem is solved when all misfits are eliminated:

“It is common practice in engineering, if we wish to make a metal face perfectly smooth and level, to fit it against the surface of a standard steel block… by inking the surface of this standard block and rubbing our metal face against the inked surface. If our metal face is not quite level, ink marks appear on it at those points which are higher than the rest. We grind away these high spots…. The face is level when it fits the block perfectly, so that there are no high spots which stand out any more.” (Notes p. 19)

In any reasonably complex problem these misfits are interrelated — mediated by common considerations — and a design must make appropriate tradeoffs when misfits conflict. For example, a kettle needs to heat up quickly yet keep its water hot for a sufficient period of time, and these requirements (i.e. potential misfits) depend on the properties of the material used in construction. Perhaps the most common design tradeoff is that of expense: the lower bound on quality of materials conflicts with the upper bound on price of the resultant product.

The unselfconscious process produces good fit slowly through selective pressures, yet we don’t have the luxury of waiting generations to solve modern problems, and many problems are simply too complex to be solved by blind search.

Sets of interdependent misfits can be thought of as a graph, a now-common data structure. Alexander observes that when the graph consists of loosely interconnected clusters of strongly interconnected misfits we have some chance of finding a solution. When the graph is fully connected — every misfit affects every other misfit — solutions are nearly unobtainable.

Design For a Brain

This should make intuitive sense, but in case you need justification, imagine a brain. Or a simplistic toy model of a brain with 100 neurons, each of which may be active or inactive. As time progresses, say every hundredth of a second, active neurons may become deactivated with probability ½ and inactive neurons that are connected to active neurons may be activated with probability ½.

The behavior of our toy brain model depends on the nature of its interconnections. In a sparsely connected brain even a strong activation quickly dies out, while in a fully connected brain even a single active neuron will cause vigorous activity lasting, for all practical purposes, for all time. Curiously, if the neurons are divided into 10 clusters, each fully connected, neural activity will reliably peter out after a reasonable amount of time.
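The toy model is easy to simulate. The sketch below is my own minimal rendition, not from the book: I model “sparse” as fifty isolated pairs, “clustered” as ten fully connected clusters of ten, and seed the network with a single active neuron.

```python
import random

def step(active, adj, rng):
    """One synchronous tick: active neurons deactivate with probability 1/2;
    inactive neurons with at least one active neighbor activate with
    probability 1/2."""
    nxt = set()
    for n in adj:
        if n in active:
            if rng.random() < 0.5:
                nxt.add(n)               # stayed active
        elif not active.isdisjoint(adj[n]):
            if rng.random() < 0.5:
                nxt.add(n)               # switched on by an active neighbor
    return nxt

def survival_time(adj, start, max_steps, rng):
    """Ticks until all activity dies out; inf if still alive at the cap."""
    active = {start}
    for t in range(max_steps):
        active = step(active, adj, rng)
        if not active:
            return t + 1
    return float("inf")

def pairs():     # "sparse": 50 isolated pairs (neuron i wired to i ^ 1)
    return {i: {i ^ 1} for i in range(100)}

def clusters():  # 10 fully connected clusters of 10
    return {i: {j for j in range(i - i % 10, i - i % 10 + 10) if j != i}
            for i in range(100)}

def complete():  # fully connected: every neuron wired to every other
    return {i: {j for j in range(100) if j != i} for i in range(100)}
```

With a single seeded neuron, the pairs network dies almost immediately, the clustered network sputters along on the order of a thousand ticks (a live ten-neuron cluster goes fully dark with probability exactly 2⁻¹⁰ per tick), and the complete graph is effectively immortal (2⁻¹⁰⁰ per tick).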

This resonates with the idea of modularity in software engineering. Well-designed systems have cohesive modules, loosely coupled.

Acoustics and Other Faulty Categories

Alexander spills a great deal of ink arguing that many concepts of design (“economics,” “acoustics,” “neighborhood”) don’t correspond to modular divisions — they don’t carve nature at the joints. In a footnote Alexander clarifies,

“It could be argued possibly that the word “acoustics” is not arbitrary but corresponds to a clearly objective collection of requirements — namely those which deal with auditory phenomena. But this only serves to emphasize its arbitrariness. After all, what has the fact that we happen to have ears got to do with the problem’s causal structure?” (Notes footnote 5.17 p. 205)

The language we use, which revolves around how we perceive our environment, inevitably influences how we separate concerns. A single design decision, say the choice of material or geometry of a wall, might impact a room’s acoustics, heating, and economy of construction — these supposed concepts are entangled in a causal sense.

Another example, particularly germane to the problem of ensuring the safety of complex systems:

“Take the concept “safety,” for example. Its existence as a common word is convenient and helps hammer home the very general importance of keeping designs danger-free. But it is used in the statement of such dissimilar problems as the design of a tea kettle and the design of a highway interchange…. as far as the individual structure of the two problems goes, it seems unlikely that the one word should successfully identify a principal component subsystem in each of these two very dissimilar problems.”

These arguments notwithstanding, I feel compelled to defend the validity of acoustical diagrams. Such diagrams represent general needs, clusters of requirements that do carve reality at the joints in a certain way. It’s hard to imagine how we can build a structure that satisfies the needs of human existence, the need for fresh air, for instance, without understanding how air might flow in the structure as designed. This is true regardless of how many design decisions impinge on the dynamical system.

Alexander is right to be concerned with how our language influences where we draw conceptual boundaries, but in my opinion he seems unduly influenced by linguistic determinism.

The Algorithm

The gory details of the process, what I’m calling an algorithm, are best left to the book. I’ll attempt to describe it in broad strokes.

The first observation is that most real-world problems can be divided into smaller problems that can be solved individually. To accomplish this Alexander proposes making a list of all possible misfits (or requirements if you prefer) and identifying links between them. The links are to be undirected and may be weighted +1 for concurrence or -1 for conflict. Identifying which misfits are linked is certainly a hard problem, and while they might be discovered by observing correlations among successful forms already in existence, Alexander suggests that this is not what we want, nor is it practical to exhaustively search design space. Which I would argue is not what we want either.

“Instead of just looking for statistical connections between variables, we may try to find causal relations between them. Blind belief based only on observed regularity is not very strong, because it is not the result of a seen causal connection. But if we can invent an explanation for inter-variable correlation in terms of some conceptual model, we shall be much better inclined to believe in the regularity, because we shall then know which kinds of extraneous circumstances are likely to upset the regularity and which are not. We call a correlation “causal” in this second case, when we have some kind of understanding or model whose rules account for it…. We shall say that two variables interact if and only if the designer can find some reason (or conceptual model) which makes sense to him and tells him why they should do so.” (Notes p. 109, emphasis in the original)

Now we’re getting somewhere. It seems to me that Alexander has been grasping at an idea up to this point, dancing around with correlations and sample spaces, and only now coming to something like a solid foundation. The links in the graph don’t quite seem real, in the sense that the underlying causal structure seems more fundamental, so why invoke the links at all?

A better representation would be to identify a set of causal variables (which we can think of as design decisions) along with directional links to the misfit variables (the requirements). This may be equivalent to the formulation of undirected links plus causal models, but to me feels more succinct.

The algorithm itself is of the recursive divide-and-conquer type, familiar to CS majors everywhere (if you’ve ever looked up a name in a phone book you’ve performed binary search, the simplest of the divide-and-conquer algorithms). Once the links are determined we identify small sets of misfits that are heavily interconnected, then treat the set as a single misfit. In this way we build a hierarchical decomposition of the problem.
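The clustering step can be sketched as greedy agglomerative merging over the misfit graph: repeatedly fuse the two clusters with the strongest interconnection, recording each fusion to build the hierarchy. This is a toy stand-in, not the procedure from the book's mathematical appendix; the kettle misfits are hypothetical, and I ignore the ±1 signs, treating every link as positive weight.

```python
from itertools import combinations

def decompose(links, target=1):
    """Greedy hierarchical decomposition of a misfit graph.

    `links` maps frozenset pairs of misfit names to a link weight.
    Merges the two most strongly interconnected clusters until `target`
    clusters remain (or no cross-links are left); the recorded merges
    form a hierarchy of subproblems.
    """
    names = {m for pair in links for m in pair}
    clusters = [frozenset([m]) for m in names]
    tree = []

    def weight(a, b):
        # total link weight crossing between cluster a and cluster b
        return sum(w for pair, w in links.items()
                   if pair & a and pair & b)

    while len(clusters) > target:
        a, b = max(combinations(clusters, 2), key=lambda ab: weight(*ab))
        if weight(a, b) == 0:
            break                  # the remaining clusters are independent
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(a | b)
        tree.append((a, b, a | b))
    return clusters, tree

# hypothetical kettle misfits; each link marks a shared design consideration
links = {
    frozenset(["heats fast", "material cost"]): 1,
    frozenset(["heats fast", "keeps heat"]): 1,
    frozenset(["keeps heat", "material cost"]): 1,
    frozenset(["easy to grip", "easy to pour"]): 1,
}
groups, merges = decompose(links, target=2)
```

On this input the thermal/cost misfits end up in one cluster and the handling misfits in another; each subcluster can then be handed to a designer as a single, smaller problem.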

A problem decomposed in this way can be solved piecemeal by building a constructive diagram, a unified description that nails down a form’s formal description (“what it is”) and its functional description (“what it does”) — for an example see Notes p. 88. This is the true work of the designer, while the specification, the denotation of the boundary between form and context, the determination of links and causal models, and the decomposition are mere preliminaries.

Where is The Core?

Let’s consider some problems that may seem unrelated: designing a lesson plan, installing a software package, and producing a new theatrical production. You may want to take a few minutes to think about what these activities have in common.

Ready?

What I have in mind is the notion of dependency, a sort of this-before-that. When teaching a complex subject it’s important to begin with fundamental building blocks and only then move on to applications that make use of these ideas. Software installers, at least well designed ones, will only install modules that are necessary for the desired functionality and take into account what is already installed. The work of putting on a theatrical production must be done in a certain order: first the script must be written, then the cast assembled, then the costumes made and the lines learned, and only then can opening night take place.

Each of these activities consists of many steps, some of which depend on others. Taken together they form a dependency graph. If we are to have any hope of completing the program, there can be no circular dependencies: the graph must be acyclic. Steps whose dependencies are satisfied may be worked on in parallel, provided the resources are available. An ordering of steps that respects the dependencies of a directed acyclic graph is known as a topological sort, efficient algorithms for which were discovered in the late 50’s.
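Topological sort is a few lines of Kahn's algorithm in Python (the modern standard library even ships graphlib.TopologicalSorter). The production steps below paraphrase the theatrical example above; the dependency structure is my own reading of it.

```python
from collections import deque

def topo_sort(deps):
    """Kahn's algorithm: `deps` maps each step to the steps it depends on.
    Returns one valid ordering, or raises if the graph has a cycle."""
    pending = {step: set(before) for step, before in deps.items()}
    waiting = {}                       # step -> steps waiting on it
    for step, before in deps.items():
        for b in before:
            waiting.setdefault(b, set()).add(step)
            pending.setdefault(b, set())
    ready = deque(s for s in pending if not pending[s])
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for w in waiting.get(s, ()):
            pending[w].discard(s)      # dependency satisfied
            if not pending[w]:
                ready.append(w)
    if len(order) != len(pending):
        raise ValueError("circular dependency detected")
    return order

# the theatrical production from above
production = {
    "write script": [],
    "assemble cast": ["write script"],
    "make costumes": ["assemble cast"],
    "learn lines": ["write script", "assemble cast"],
    "opening night": ["make costumes", "learn lines"],
}
```

Note that costume-making and line-learning have no dependency on each other, so once the cast is assembled they can proceed in parallel; any topological order is a valid schedule.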

This is a core framework. Multi-step situations with interdependencies show up over and over in real-world problems, and knowing that a solution is easy to calculate given a description of the graph is empowering and clarifying. Complex corpuses of knowledge can be encoded and transmitted efficiently once a little bit of internal structure is worked out. In a very real sense, modern society runs on dependency graphs — Alexander’s process feels unwieldy by comparison.

I don’t mean to detract from Alexander’s contribution, which is especially impressive considering the state of the art of computational theory at the time. There are a lot of good ideas here. Skip the mathematical details (unless that’s your thing), don’t take any of it too seriously, and enjoy the flow of ideas.