The moment came about two years ago, in the middle of a lecture on biologically-inspired algorithms (evolutionary computation, ant-colony optimization, etc.). My attention had strayed so very briefly - and yet the material immediately ceased to make sense. It seemed obvious that continuing to follow the proof on the board was futile - the house of cards inside my head had tumbled down. Dark thoughts came: if my train of thought is so easily derailed, what am I doing in the thinking business? The answer "nothing else has come remotely close to making me happy" won't pay the bills. Floor, ceiling, and proofy whiteboard swirled together as I continued in this misery. It was then that I suddenly realized exactly what had led me to pick up programming when I was young, and to continue studying every aspect of computing I could lay my hands on. It was the notion that a computer could make me smarter. Not literally, of course - no more than a bulldozer is able to make me stronger. I thirsted for a machine which would let me understand and create more complex ideas than my unassisted mind is capable of, in the same way that heavy construction equipment can let mediocre biceps move mountains.

What, exactly, has the personal computer done to expand my range of thinkable thoughts? Enabling communication doesn't count - it makes rather light use of the machine's computational powers. From the dawn of programmable computing, AI has been bouncing back and forth between scapegoat and media darling, while IA - intelligence amplification - steadily languishes in obscurity. It is very difficult to accurately imagine being more intelligent than one already is. What would such a thing feel like? It is easier to picture acquiring a specific mental strength, such as a photographic memory. The latter has been a fantasy of mine since early childhood. Improving long-term memory would give me a richer "toybox" for forming associations and ideas, whereas a stronger short-term memory might feel like an expanded cache.

Before the lecture was through, I had formed a very clear picture of the mythical photographic memory simulator. With only a few keystrokes, it would allow me to enter any thought which occurs to me, along with any associations. The latter set would be expanded by the program to include anything which logically relates to the entry in question. As the day went on, the idea became less and less clear in my mind, until what once appeared to be a thorough understanding of the problem and its solution had mostly vanished. All that remained was a set of clumsily scribbled notes.
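The scribbled notes amounted to something like the following sketch: entries stored with user-supplied associations, which the program then expands to everything reachable through them. (A transitive closure is a crude stand-in for "anything which logically relates to the entry"; the class and method names are my own illustration, rendered here in Python for brevity rather than in the Lisp the rest of this post argues for.)

```python
class Memory:
    """A toy 'photographic memory': thoughts plus expanded associations."""

    def __init__(self):
        self.links = {}          # entry -> set of directly associated entries

    def note(self, entry, *associations):
        """Record a thought along with any associations that occur to the user."""
        self.links.setdefault(entry, set()).update(associations)
        for a in associations:   # treat associations as symmetric
            self.links.setdefault(a, set()).add(entry)

    def related(self, entry):
        """Expand the association set: everything reachable from the entry."""
        seen, frontier = set(), [entry]
        while frontier:
            node = frontier.pop()
            for neighbor in self.links.get(node, ()):
                if neighbor not in seen and neighbor != entry:
                    seen.add(neighbor)
                    frontier.append(neighbor)
        return seen

m = Memory()
m.note("ant-colony optimization", "stigmergy", "metaheuristics")
m.note("stigmergy", "termite mounds")
print(m.related("ant-colony optimization"))
# the expansion pulls in "termite mounds" indirectly, via "stigmergy"
```

The real thing would of course need far richer notions of "logically relates" than reachability - that gap is precisely where the clear picture faded as the day went on.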

Later, I discovered that the idea was not new at all. This did not surprise me. What surprised me was the fact that none of the attempted solutions had caught on. What I found ranged from the hopelessly primitive to the elephantine-bloated, with some promising but unfinished and promising but closed-source ones mixed in. None of the apps featured expandability in a homoiconic language, even by virtue of being written entirely in one. From my point of view, the lack of this feature is a deal-killer. I must be able to define programmatic relationships between datums, on a whim - plus new syntaxes for doing so, also on a whim. There must be no artificial boundary between code and data, for my thoughts are often best expressed as executable code - even when entirely unrelated to programming.
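To make "programmatic relationships between datums, on a whim" concrete: the minimum demand is that a relation be an ordinary executable object stored alongside the data, definable at any moment and immediately usable in queries. The fragment below gestures at this in Python (all names are my own invention); a homoiconic Lisp goes much further, since there the relation's source is itself data the system can inspect and rewrite.

```python
# Relations are plain functions kept next to the data; a query simply runs one.
entries = ["1984-06-12", "lambda calculus", "1969-10-29", "ARPANET"]

relations = {
    # a relation defined on a whim: "looks like an ISO date"
    "is-date": lambda e: len(e) == 10 and e[4] == e[7] == "-",
}

def query(relation, items):
    """Apply a stored, executable relation to the data."""
    return [e for e in items if relations[relation](e)]

print(query("is-date", entries))   # -> ['1984-06-12', '1969-10-29']
```

The "new syntaxes, also on a whim" half of the demand is exactly what this sketch cannot show: it calls for macros, and hence for a Lisp.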

Thus I began work on Skrode - named after the combination of go-cart and artificial short-term memory used by a race of sentient plants in a well-known novel. The choice of languages came down to Common Lisp vs. Scheme, as the Lisp family is the only environment where I do not feel caged by a stranger's notions of what programming should be like. I've always felt CL to be ugly and bloated with non-orthogonal features. Scheme, on the other hand, is minimal to the point of uselessness unless augmented with non-standard libraries. Neither CL nor any of the existing Schemes seemed appetizing. What I needed was a Lisp system which I could squeeze into my brain cache in its entirety - thus, practically anything written by other people would not fit the bill.

By this time, I had come to believe that every piece of information stored on my computer should be a first-class citizen of the artificial memory. The notion of separate applications in which arbitrarily divided categories of data are forever trapped seemed more and more laughable. Skrode would have to play nicely with the underlying operating system in order to display smooth graphics, manage files, and talk TCP/IP. Thus I set to work on a Scheme interpreter, meant to be as simple as possible while still capable of these tasks. This proved to be a nightmarish job, not because of its intellectual difficulty but from the extreme tedium. None of the mature cross-platform GUI libraries play nicely with the Lispy way of thinking about graphics. (Warning: don't read Henderson's paper if you are forced to write traditional UI code for a living. It might be hazardous to your mental health.) I learned OpenGL, and found it to be no solution at all, for the same reasons.

My concept of the system expanded from that of a hierarchical notebook to a complete programming system. Yet I knew that it could not satisfy me if it contained any parts not modifiable from within itself. Emacs would stand a chance, were it not for its C-language core. Then I discovered that system-level reflection is not an unreasonable demand. I found out that instant documentation and source access for any object known to the user/programmer is a reasonable thing to expect. That it is possible to halt, repair, and resume a running program. And, depressingly, that one could do these things on 1980s vintage hardware but not on modern machines.

The wonders of the Lisp Machine world and the causes of its demise have been discussed at great length by others. The good news is that CPU architectures have advanced to the point where the fun can start again, on commodity hardware. I have worked out some interestingly efficient ways to coax an AMD Opteron into becoming something it was never meant to be.

Skrode remains in cryostasis, and awaits the completion of Loper - an effort to (re)create a sane computing environment.

Learning an API does not have to feel like dealing with a Kafkaesque bureaucracy.

And that is enough hot air for the time being. The next post will speak only of usefulness. Such as how one might build an OS where yanking and re-inserting the power cord results in no tears. And other outlandish yet very practical things.