I remember when, a long time ago, Logic Programming was just starting out. The logic programmers would go to the functional programming gatherings and hang around the sidelines, hoping to convince everyone that logic programming was a kind of functional programming worthy of attention.

Then came the Japanese “Fifth Generation” project (which was based on Prolog) and the situation was reversed. Functional programmers would loiter in the hallways at Logic Programming conferences trying to convince everyone that functional programming was a kind of logic programming worthy of attention.

Neither side ever succeeded, partly because of serious technical difficulties. Functional programming is deterministic; a program has only one answer. Adding nondeterministic operators (like McCarthy’s amb(a,b), which returns either a or b) messes up the least fixed point semantics. Attempts to handle this with a “power domain” construct never really worked.

On the other side, Prolog had no obvious way to implement streams, which functional programming can handle with infinite lists (similar to the way Lucid does it). Adding infinite terms to Prolog messes up the minimum (Herbrand) model semantics. The problem is that you cannot calculate infinite objects incrementally, with backtracking. No matter how far you get, you can never be sure the backtracking won’t come all the way back. You can never release any output with confidence.
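To make the infinite-list approach concrete, here is a minimal Python sketch (my own illustration, not how Lucid or any Prolog system works): a stream rendered as a lazy "infinite list" via a generator, whose consumer can take a prefix and release it immediately — exactly the incremental output that backtracking would make unsafe.

```python
from itertools import islice

def naturals():
    """An infinite stream as a lazy list: each element is produced
    only when the consumer demands it."""
    n = 0
    while True:
        yield n
        n += 1

# The consumer commits to a finite prefix with confidence; nothing
# can later "backtrack" and retract these elements.
prefix = list(islice(naturals(), 5))
print(prefix)  # [0, 1, 2, 3, 4]
```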

Fortunately, Lucid does not use infinite lists to represent streams; it uses time-varying (indexed) variables. In the old days functional programmers thought this was a trivial difference, just a matter of syntactic sugar. But it turned out to be a difference that made all the difference.

The idea is to use time-varying predicates; to use temporal Horn logic. Temporal Horn Logic Programming (THLP) looks like ordinary Horn Logic Programming except that the temporal operators first and next can be applied to atomic formulas (not terms). Here’s a simple example

first i(0).

next i(s(X)) :- i(X).

This axiomatizes a time-varying predicate i that at time t succeeds for the term s(s(…s(0)…)) with t occurrences of s. This is the THLP program that corresponds to the Lucid program

I = 0 fby I+1
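One way to read this (a sketch of the intended semantics, in Python of my own devising, not an implementation of THLP) is that a time-varying variable is simply a function from a time index to a value; the first clause fixes the value at time 0 and the next clause relates time t+1 to time t.

```python
def i(t):
    """The Lucid stream I = 0 fby I+1 read as an indexed variable:
    i(0) = 0 (the 'first' clause); i(t+1) = i(t) + 1 (the 'next' clause)."""
    return 0 if t == 0 else i(t - 1) + 1

print([i(t) for t in range(5)])  # [0, 1, 2, 3, 4]
```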

Here is a more elaborate one for the stream of Fibonacci numbers

first fib(0).

first next fib(1).

next next fib(K) :- next fib(M), fib(N), K is M + N.

Suppose we have a predicate light that at any time succeeds with either red or green. The following keeps a count of the colors:

first count(0,0).

next count(s(X),Y) :- count(X,Y), light(green).

next count(X,s(Y)) :- count(X,Y), light(red).

There are two (solvable) problems with THLP as described so far. The first is that we cannot define ‘filters’ whose output rate differs from their input rate. This can be solved using higher-order logic programming, which I will discuss in a future post.

The second problem is that pure THLP does not allow the programmer to make arbitrary but definite choices. The problem first presented itself in connection with input/output. Suppose we want to write a program whose output increases by one or two at each time step. We can do output by having a special predicate output axiomatized by the programmer. In the simplest case, at time t output would succeed for only one ground term, which would form the output.

But why impose the constraint that output be single valued? We could use the convention that the output at time t is any one of the ground terms for which output succeeds. The problem then is that if we want the next output to be the current output plus one or two, we have to know which ground term was chosen for the current output. This requires a language extension: a single-valued companion predicate !output which succeeds for the value chosen. In fact, since data flow programs work by internal I/O, every predicate needs a companion (though we won’t normally use all of them). Our program then becomes

first output(0).

next output(s(X)) :- !output(X).

next output(s(s(X))) :- !output(X).

The amazing fact is that all of this still works as logic – in particular, programs have minimal models. My Ph.D. student Mehmet Orgun (now at Macquarie in Australia) proved all this and more. He studied intensional logic programming in general and found conditions on intensional operators that allow them to appear on the right and left sides of rules. He has continued this research, and it has also been taken up by another former student, Panos Rondogiannis (now at the University of Athens in Greece). I’ll have more to say about this in future posts.
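To see the intended behaviour of the output/!output program, here is a hedged Python simulation (the names run and trace are mine, and random stands in for whatever mechanism makes the arbitrary choice): at each tick, output admits two candidates — the committed value plus one or plus two — and !output records the definite choice actually made, so the next step can depend on it.

```python
import random

def run(steps, rng=random):
    """Simulate the output/!output program: the committed value
    (!output) increases by one or two at each time step."""
    chosen = 0            # first output(0): the value !output succeeds for
    trace = [chosen]
    for _ in range(steps):
        candidates = [chosen + 1, chosen + 2]  # the two 'next output' clauses
        chosen = rng.choice(candidates)        # the arbitrary but definite choice
        trace.append(chosen)
    return trace

print(run(5))
```

Each run prints a different trace, but every trace starts at 0 and climbs by one or two per step.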