Dialogical logic programming

Game semantics interprets logical formulas as games played between a verifier and a falsifier. Combined with proof-search-as-computation, this can serve as a foundation for interactive and concurrent logic programs.

Example

We can already write interactive programs in Prolog, but there are issues. They can be illustrated by a simple program that asks for the user's name.

ask_name :-
    write('What is your name? '),
    read(Name),
    write('Hello '),
    write(Name),
    nl.

If you save this text as "bad_hello.pro", you can run the following query:

?- consult('bad_hello.pro').
?- ask_name.
What is your name? foo.
Hello foo
true.

Everything seems all right here, except that we've stepped outside of logic programming. To illustrate, I flip the goals around:

psychic_ask_name :-
    write('Hello '),
    write(Name),
    nl,
    write('What is your name? '),
    read(Name).

This should be acceptable, as conjunction is commutative. Add the text into "bad_hello.pro", reconsult the file and run the query:

?- psychic_ask_name.
Hello _L155
What is your name? wat.
true.

Good try, Prolog!

The original program is not logical to begin with. Prolog abuses the sequentiality of its proof search to graft an additional, imperative programming model onto the language.

Additive conjunctive quantifiers allow representation of the same programs logically:

ask_name :-
    (∏ write 'What is your name? ')
    (∏ read Name)
    (∏ write 'Hello ')
    (∏ write Name)
    (∏ nl)
    true.

psychic_ask_name :-
    (∏ write 'Hello ')
    (∏ write Name)
    (∏ nl)
    (∏ write 'What is your name? ')
    (∏ read Name)
    true.

A Prolog interpreter cannot run this program, as it has no semantics for positive additive quantifiers. It's a seemingly minor change, but the semantics of the program are completely different. Here's what should happen:

?- ask_name.
true.
What is your name? foo.
Hello foo
?- psychic_ask_name.
false.

Proof search looks for a strategy to "win" the game that the goal represents. Technically, in the first example we're trying to prove something like ∀x∃y. x = y. In the second example we're trying to prove the quantifier-flipped statement ∃x∀y. x = y.

The interpreter refuses to run the latter program because it cannot find a strategy to "win the game" that psychic_ask_name models.
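To make this reading concrete, here is a small Python sketch of that strategy search. It is my own illustration, not part of any Prolog system; the tuple encoding and names are made up. A ∐/∃ node is the computer's move, so one winnable branch suffices; a ∏/∀ node is the environment's move, so every branch must be winnable:

```python
# Formula encoding: ('atom', won) is a finished game (True = computer wins);
# ('choice', subgames) is a ∐/∃ node, ('all', subgames) a ∏/∀ node.
def winnable(formula):
    tag = formula[0]
    if tag == 'atom':
        return formula[1]
    if tag == 'choice':   # the computer moves: one winnable branch suffices
        return any(winnable(f) for f in formula[1])
    if tag == 'all':      # the environment moves: every branch must be winnable
        return all(winnable(f) for f in formula[1])
    raise ValueError(tag)

domain = [0, 1, 2]

# ∀x∃y. x = y : the environment picks x, then we answer y = x.
forall_exists = ('all', [('choice', [('atom', x == y) for y in domain])
                         for x in domain])

# ∃x∀y. x = y : we must commit to x before seeing y; no single x works.
exists_forall = ('choice', [('all', [('atom', x == y) for y in domain])
                            for x in domain])

print(winnable(forall_exists))  # True
print(winnable(exists_forall))  # False
```

Over any domain with more than one element, the ∀∃ form has a winning strategy (answer y := x) while the ∃∀ form does not, matching the behaviour of ask_name and psychic_ask_name.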

To make this all work, the logical formulas are interpreted as games played between a computer and its environment. We can also model ordinary games this way but the results aren't immediately interesting.

If you model chess and ask the interpreter to play it, it will get stuck searching for a strategy to win the game.

If you ask for a game of tic-tac-toe, the search may finish, but the interpreter won't play: it cannot find a way to win the game with certainty against every opponent. However, there exists a perfect play of tic-tac-toe that leads to a draw or a win, and it may be found by loosening the constraint that the computer must win the game.

Paper-rock-scissors is another game that the computer cannot win with certainty. It can be modeled with:

paper_rock_scissors :-
    (∐ X ∈ [paper, rock, scissors])
    (∏ Y ∈ [paper, rock, scissors])
    X = scissors, Y = paper;
    X = paper, Y = rock;
    X = rock, Y = scissors.

Note that representing it this way does not directly model both players moving simultaneously. Instead, there's an intermediary that first collects both players' choices before handing out the results.
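Under this reading the proof search fails for a concrete reason: whatever X the computer commits to, the environment can answer with a Y that defeats it. A short Python check of this (my own sketch; the beats table is the standard one):

```python
# Winning pairs (X beats Y) for paper-rock-scissors.
beats = {('scissors', 'paper'), ('paper', 'rock'), ('rock', 'scissors')}
moves = ['paper', 'rock', 'scissors']

# (∐ X)(∏ Y): the computer picks X first, then must beat every Y.
winnable = any(all((x, y) in beats for y in moves) for x in moves)
print(winnable)  # False: no X beats all three replies
```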

Applying these ideas, I've also written a Prolog program that plays tic-tac-toe. Since the bounded quantifiers can be translated away, you can already use this technique to write programs in Prolog. The language would need extensions, though, to make this a convenient general-purpose technique for writing interactive programs.

Implementation

Since the language has to be extended with quantifiers, we need a way to handle them. I think there are two papers that provide the details needed for implementing this in practice.

"A Proof Procedure for the Logic of Hereditary Harrop Formulas" - Gopalan Nadathur - 1997 "Logic programming with bounded quantifiers" - Andrei Voronkov - 1992

The former paper describes how to implement unbounded quantification. It implements variable scoping by assigning indices to variables.

I haven't understood everything in the latter paper yet, so I'm not done. Bounded quantifiers are important for representing gameplay in logic.

I suppose I won't try to understand the paper further and will just try implementing the bounded quantifiers through coroutining.

At first I thought these features could be implemented by introducing backtracking-on-success for positively quantified variables. I didn't get that approach to work.

Update: The paper about bounded quantifiers provides the following translations into Prolog.

A(X) :- B(X) ∧ C(X).
----------------------------------------
A(X) :- D(X), E(X).
D(X) :- B(X).
E(X) :- C(X).

A(X) :- ∃V B(X,V).
----------------------------------------
A(X) :- B(X,V).

A(X) :- (∃ V ∈ L) B(X,V).
----------------------------------------
A(X) :- D(X,L).
D(X, [Y|_]) :- B(X, Y).
D(X, [_|Z]) :- D(X, Z).

A(X) :- (∀ V ∈ L) B(X,V).
----------------------------------------
A(X) :- D(X,L).
D(X, []).
D(X, [Y|Z]) :- B(X, Y), D(X, Z).
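Read operationally, the generated D predicates walk the list much like any/all do over an iterable. A Python sketch of the same recursions (my own illustration; b stands for the body predicate B):

```python
# Mirrors  D(X, [Y|_]) :- B(X, Y).   and   D(X, [_|Z]) :- D(X, Z).
def exists_in(b, x, l):
    if not l:
        return False                       # empty list: no witness
    return b(x, l[0]) or exists_in(b, x, l[1:])

# Mirrors  D(X, []).   and   D(X, [Y|Z]) :- B(X, Y), D(X, Z).
def forall_in(b, x, l):
    if not l:
        return True                        # empty list: vacuously true
    return b(x, l[0]) and forall_in(b, x, l[1:])

less = lambda x, v: x < v
print(exists_in(less, 3, [1, 2, 4]))  # True: 3 < 4
print(forall_in(less, 0, [1, 2, 4]))  # True: 0 is below every element
```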

Additionally the paper describes subset quantifiers as well.

A(X) :- (∃ V ⊆ L) B(X,V).
----------------------------------------
A(X) :- D(X,L).
D(X, Z) :- B(X, Z).
D(X, [_|Z]) :- D(X, Z).

A(X) :- (∀ V ⊆ L) B(X,V).
----------------------------------------
A(X) :- D(X,L).
D(X, []).
D(X, [Y|Z]) :- B(X, [Y|Z]), D(X, Z).

That's probably all you need for experimentation.

Heuristics

You can get surprisingly far with plain brute force. For example, in tic-tac-toe, if the player puts a cross in the center, there are only 27600 possible ways the game may play out from that point on, so brute-force searching the game tree isn't out of the question.

You need heuristics if you want the computer to play games where it cannot succeed with certainty.

The Fifth Generation Computer Systems project attempted to build computers around the concepts of logic programming, and in doing so introduced concurrent logic programming. Guarded Horn clauses and committed choice can be seen as an extreme form of heuristic for dialogical logic programming, where every decision is made without search or prediction.

It may be possible to augment the search algorithm itself with alpha-beta pruning and negamax search. As of writing this, I haven't explored the idea further.
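As a sketch of what such an augmentation might look like, here is negamax with alpha-beta pruning on a toy pile-taking game (take 1 or 2 stones; taking the last stone wins). The game and all names are my own illustration, not from the tic-tac-toe program:

```python
def negamax(pile, alpha=-1, beta=1):
    """Exact score for the side to move: +1 a forced win, -1 a forced loss."""
    if pile == 0:
        return -1  # no move available: the player to move has already lost
    best = -1
    for take in (1, 2):
        if take > pile:
            break
        # The opponent's score is negated, and the alpha-beta window flips.
        score = -negamax(pile - take, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: this line is already decided
    return best

print([negamax(n) for n in range(1, 7)])  # [1, 1, -1, 1, 1, -1]
```

Piles divisible by 3 are lost for the player to move; the cutoff prunes the remaining branch whenever one branch already guarantees a win.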

Foundations and theory

Computability logic treats logical formulas as games played between two participants: one tries to prove the formula, the other tries to refute it. The logic pivots on the idea that conjunction and disjunction satisfy De Morgan's laws, and that there are multiple different conjunction and disjunction operators, not just two or four.

Computability logic seems to correspond with linear logic. Deep inference helps in understanding linear logic.

An idea I haven't yet examined is that computability logic allows formulas to share subtrees, forming directed acyclic graphs instead of trees.

For logic programming, the additive and multiplicative linear logic operators seem to form a sufficient foundation to build on. Computability logic lets us treat variables as quantified forms of these operators.

            | parallel | choice
conjunction | a ∧ b    | a ⊓ b
disjunction | a ∨ b    | a ⊔ b

quantified  | parallel | choice
conjunction | ∀x. y    | ∏x. y
disjunction | ∃x. y    | ∐x. y

De Morgan laws:

¬(a ∧ b) = ¬a ∨ ¬b
¬(a ∨ b) = ¬a ∧ ¬b
¬(a ⊓ b) = ¬a ⊔ ¬b
¬(a ⊔ b) = ¬a ⊓ ¬b
¬(∀x. y) = ∃x. ¬y
¬(∃x. y) = ∀x. ¬y
¬(∏x. y) = ∐x. ¬y
¬(∐x. y) = ∏x. ¬y
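These dualities make negation computable by pushing ¬ inward over a formula tree. A Python sketch of that transformation (my own encoding; 'pall'/'pchoice' stand for ∏/∐, 'cand'/'cor' for ⊓/⊔):

```python
# Formula encoding: ('atom', name) / ('natom', name) for literals;
# ('and'|'or'|'cand'|'cor', left, right) for ∧ ∨ ⊓ ⊔;
# ('forall'|'exists'|'pall'|'pchoice', var, body) for ∀ ∃ ∏ ∐.
DUAL = {'and': 'or', 'or': 'and',
        'cand': 'cor', 'cor': 'cand',
        'forall': 'exists', 'exists': 'forall',
        'pall': 'pchoice', 'pchoice': 'pall'}

def negate(f):
    """Push negation inward using the De Morgan dualities above."""
    op = f[0]
    if op == 'atom':
        return ('natom', f[1])
    if op == 'natom':
        return ('atom', f[1])
    if op in ('and', 'or', 'cand', 'cor'):
        return (DUAL[op], negate(f[1]), negate(f[2]))
    return (DUAL[op], f[1], negate(f[2]))  # quantifier case

# ¬(∏x. p ⊔ q) = ∐x. ¬p ⊓ ¬q
print(negate(('pall', 'x', ('cor', ('atom', 'p'), ('atom', 'q')))))
# ('pchoice', 'x', ('cand', ('natom', 'p'), ('natom', 'q')))
```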

Parallel operators behave as if two games were played at the same time. If it's conjunctive, you have to win both games. If it's disjunctive, you have to win only one of them, and you can use the other games to win that one.

Choice operators behave as if there were a choice for either player. The conjunctive choice belongs to the opponent, whereas the disjunctive choice is yours.

Computability logic treats quantifiers as repetition within formulas. For example, the following bounded choice-disjunction formula:

(∐ X ∈ 0..2) even(X)

would be exactly the same as writing:

even(0) ⊔ even(1) ⊔ even(2)
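This expansion is mechanical for a finite domain. A small Python sketch (my own; the formula is rendered as a string only for illustration):

```python
def expand(pred_name, domain):
    """Expand (∐ X ∈ domain) pred(X) into an explicit ⊔-disjunction."""
    return ' ⊔ '.join(f'{pred_name}({x})' for x in domain)

print(expand('even', range(3)))  # even(0) ⊔ even(1) ⊔ even(2)

# The computer wins the choice-disjunction iff some disjunct is winnable,
# e.g. by choosing X = 0 here:
even = lambda n: n % 2 == 0
print(any(even(x) for x in range(3)))  # True
```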

All variables in Prolog can be interpreted as choice-existential quantifiers like the one above. Binding a variable is similar to making a choice. If the choice is wrong, we backtrack from it as long as there are other choices left to examine.

We may also express Prolog's disjunction through disjunctive choice:

even(0).
even(s(s(N))) :- even(N).

Corresponds to:

even(X) :- X = 0 ⊔ (X = s(s(N)) ∧ even(N)).

This might give us a sound concept of negation in the language, though I haven't verified that it does. For example, the negation of the above might be:

¬even(X) :- X != 0 ⊓ (X != s(s(N)) ∨ ¬even(N)).
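One way to gain confidence in such clauses is to check the proposed negation against the original predicate over Peano numerals. A Python sketch of that check (my own encoding: 'z' for 0, ('s', n) for s(n)):

```python
def to_peano(n):
    """Build the Peano numeral for a nonnegative integer."""
    return 'z' if n == 0 else ('s', to_peano(n - 1))

# even(0).  even(s(s(N))) :- even(N).
def even(x):
    if x == 'z':
        return True
    if x[1] != 'z' and x[1][0] == 's':   # x = s(s(n))
        return even(x[1][1])
    return False                         # x = s(z)

# ¬even(X) :- X ≠ 0 ⊓ (X ≠ s(s(N)) ∨ ¬even(N)).
def neg_even(x):
    if x == 'z':
        return False
    if x[1] != 'z' and x[1][0] == 's':   # x = s(s(n))
        return neg_even(x[1][1])
    return True                          # x = s(z)

print(all(neg_even(to_peano(n)) == (not even(to_peano(n))) for n in range(8)))
# True: the negation clauses agree with even on 0..7
```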

There are linear logic programming languages that might reveal how to implement the parallel disjunction and parallel quantifiers.

Appendix A: Formulas are types

Prolog has the problem that people commonly don't understand it. They understand the syntax, which logical connectives correspond to what, and how it's interpreted, but not what the programs mean.

Prolog formulas describe types of structures consisting of terms. For example:

type foo(X)
-----------
foo(a)
foo(b)
foo(c)
foo(a(a))
foo(a(b))
foo(a(c))
foo(...)

Without variables, the formulas describe unit types. Such types have only one value. For example, the following types refer to values of their own shape:

sunny
follows(bobby, michael)

There are also impossible types, or contradictions. For example, the following formulas are such:

(X = a, X = b)
false

It is impossible to satisfy either of the above constraints. They describe no values.

Prolog doesn't allow variables in positions such as X(bobby, michael), but technically this doesn't seem difficult to support. Such a variable should likely have a structural interpretation, e.g. X is some predicate that yields true on X(bobby, michael), and the constraint can be tested when X is bound. HOPES seems to do this, and Bill Wadge has written a nice blog post about the idea. We don't examine it further in this post.

To solidify this relationship between types and logic-program formulas, let's examine the following Prolog programs:

nat(0).
nat(s(N)) :- nat(N).

even(0).
even(s(s(N))) :- even(N).

odd(s(N)) :- even(N).

The first predicate describes natural numbers.

?- nat(N).
N = 0 ;
N = s(0) ;
N = s(s(0)) ;
N = s(s(s(0))) ;
N = s(s(s(s(0)))) ;
N = s(s(s(s(s(0))))) ;
N = s(s(s(s(s(s(0)))))) ;
N = s(s(s(s(s(s(s(0))))))) ;
N = s(s(s(s(s(s(s(s(0)))))))) .

It's almost as if we wrote N ∈ nat. The even and odd predicates work in a similar sense. If the predicate succeeds, then N can only stand for a natural number wherever nat(N) applies to it.
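This enumeration behaviour can be mimicked with a generator, which is one way to read a predicate as a description of a type's values (a sketch of mine, printing the same s(...) terms as strings):

```python
import itertools

def nats():
    """Yield Peano terms in the order the nat/1 query enumerates them."""
    n = '0'
    while True:
        yield n
        n = f's({n})'

print(list(itertools.islice(nats(), 4)))
# ['0', 's(0)', 's(s(0))', 's(s(s(0)))']
```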

By doing logic programming properly, you describe all the results that the program may produce.

Appendix B: Motivation

An interactive form of logic programming, such as this dialogical logic programming, could allow effortlessly writing programs that appear remarkably intelligent.

There is also a slight suggestion that interactive logic programs could be incredibly fun. After all, if you have a program that plays chess, all you can do is play chess. But if you have a model of chess with heuristics, then you can do far more than just play chess against it.