> This is a semantic issue, and does not magically get solved by macros.

Sorry, I was not clear enough. I don't assume that macros make the compatibility issues completely go away on the implementation side. What I mean is that the user of libraries designed inside a language framework that allows "language extensions" to be implemented in a way that they can easily be made compatible wouldn't have to worry about the issues.

Also, I can see that you like to limit the discussion to macros. Please note that I'm not just talking about macros. Don't try to put words into my mouth; it will backfire.

> before you try making your contribution to the field of type theory

Sorry, I have not been clear enough. I'm not particularly interested in type theory. I believe I understand the basics of type theory well enough to be able to confidently say, for example, that there will never be a single perfect type system for all uses. However, I am interested in metaprogramming. I'm interested in the possibility of using a metaprogramming facility provided by an extendable, but invariant, language framework to create a statically typed "embedded language" that integrates extremely well with the language framework and with other similar embedded languages and language extensions.

Second, in case you are interested, the simplistic type checking system (a couple of macros and a simplistic type inference algorithm) I implemented in Scheme does indeed define a type datatype. For reasons of simplicity, types are externally represented using a simple list-based structure. For example, the polymorphic identity function has the type (-> 0 0) and the polymorphic map function has the type (-> (list 1) (-> 1 0) (list 0)). The (positive integer) numbers represent bound type variables. The symbols -> and list are type constructors.

> Not if the compiler makes its functionality available as libraries which can be reused to write other program transformation tools.
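To make the list-based representation concrete, here is a minimal sketch, in Python rather than Scheme purely for concreteness; the `instantiate` helper and the string encoding of constructors are my own illustrative choices, not part of the system described above:

```python
# Types as nested lists: integers are bound type variables,
# leading symbols ("->", "list") are type constructors.
identity_type = ["->", 0, 0]
map_type = ["->", ["list", 1], ["->", 1, 0], ["list", 0]]

def instantiate(ty, bindings):
    """Replace bound type variables with concrete types."""
    if isinstance(ty, int):
        return bindings[ty]
    if isinstance(ty, list):
        # Keep the constructor symbol, recurse into the arguments.
        return [ty[0]] + [instantiate(t, bindings) for t in ty[1:]]
    return ty

# Instantiating map's type at 1 = "int", 0 = "string" yields the type
# of mapping an int -> string function over a list of ints:
print(instantiate(map_type, {0: "string", 1: "int"}))
# prints ['->', ['list', 'int'], ['->', 'int', 'string'], ['list', 'string']]
```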
If I interpret your suggestion simplistically, as you tend to do with my ideas, then your suggestion, which has been known to me for years, doesn't come even close to solving the compatibility issues between separately developed language extensions. For example, it doesn't allow a language extension to "reason about" a special construct whose semantics are defined by some other separately developed language extension. In summary, I see no way that external mechanisms would yield the most modular, most elegant results. The extension facility must be an integrated feature of the language framework. Unless you can point to an example that, in your opinion, yields the most modular language extension framework, uses external tools, and has demonstrably been used to define several dozen independently developed language extensions, it will be next to impossible for you to convince me, on this issue, of the validity of your viewpoint.

> I am surprised, though, that you think `reduction of duplication' is not addressed by modularity [modularity/higher-order functions]

One of the problems is that when you use higher-order functions, all bindings need to be explicit. So, assuming that you would define a parsing function that takes a datatype describing the syntax to be parsed, with slots for higher-order functions implementing the semantic actions, each of those semantic actions would need to repeat the same explicit parameter bindings.

> Are you understanding me yet?

Yes, I think I understand your point. That doesn't mean I agree with you. Imagine, for a moment, that instead of embedding CLOS into Common Lisp using macros, CLOS had been implemented by defining a completely new, incompatible dialect of Common Lisp.
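The point about repeated explicit bindings can be sketched as follows; this is a minimal parser-combinator example in Python, with all names (`lit`, `seq`, the grammar itself) invented for illustration, not taken from the discussion above:

```python
def lit(s):
    """Parser for a literal string; returns (value, rest) or None."""
    def parse(text):
        return (s, text[len(s):]) if text.startswith(s) else None
    return parse

def seq(parsers, action):
    """Sequence combinator: the semantic action receives every
    sub-result as an explicit parameter, whether it uses it or not."""
    def parse(text):
        results = []
        for p in parsers:
            r = p(text)
            if r is None:
                return None
            value, text = r
            results.append(value)
        return (action(*results), text)
    return parse

# Each semantic action must spell out bindings for all three slots,
# even though only the middle one is used:
paren_x = seq([lit("("), lit("x"), lit(")")], lambda open_, b, close: b)
print(paren_x("(x)"))  # prints ('x', '')
```

A macro could instead make the grammar's slot names available implicitly inside each action, which is exactly the kind of binding convenience higher-order functions alone cannot provide.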
If you don't understand what I'm getting at here, then I don't think we need to discuss issues relating to the topic of "Defining a new language vs. defining a new extension library", because we simply do not agree on the issue. The way I see it, it does not make sense to create a new language [dialect] each time a simple language extension is desired. It is stupid, naive, wasteful reinvention of the wheel.

As an aside, I hope you understand that I'm actually pretty competent in using higher-order techniques and macros. I know very well when something can easily be implemented using higher-order procedures.

This only suggests that either the Java designers did not design their language well enough, or that Java programmers have not been educated well enough in how to use the language properly. I would agree on both counts.

> I see programmers solving real problems rather than pontificating about how to design a more `natural' syntax or designing half-baked language extensions which neither solve real problems nor facilitate solving them.

Hmm... I'm not sure how to interpret that. Perhaps I should take it as a personal insult intended to imply that because you don't agree with my idea of an extendable language, I must be a bad designer who complains about his tools because he doesn't know how to use them. If you are interested in seeing the kind of designs I produce, I'd suggest you take a look at an ongoing school project whose JavaDoc documentation is available here. (I've returned to school to complete my degree and continue as a researcher after 6 years of working in the industry. I have about 13 years of experience in programming at the time of writing this. Of course, the last ~10 years have been much more intensive than the first couple.)
The following packages might be particularly interesting to LtU readers:

- Framework for functional objects
- Higher-order procedures in Java
- Template methods for graph algorithms

Feel free to e-mail critique relating to the software to me privately - or post it publicly if you want. Do note that the choice of Java was an external requirement. I would not have personally chosen Java.

> Products, sums, recursive datatypes

Tautological, dear Watson. Perhaps I don't understand what you are talking about here, but if you are talking about the same things that the above terms describe to me, then I have to disagree. Things such as variants and pattern matching can rather easily be described using syntax-case macros. As an aside, I have often observed that many "naive, brainwashed, static typers" confuse static typing with the ability to design data representations in certain ways. When I first learned OCaml, it also first seemed to me that variants and pattern matching had something to do with static typing. Soon, however, I understood that the two issues are orthogonal.

> higher-order functions

I would also add lambda-expressions, or anonymous procedures, to the above. Now, I think that this can be done using a reasonably modular (macro) transformation. Of course, if the base language doesn't support any way to invoke a function indirectly, the task becomes much more difficult. However, I think such a requirement is not very realistic. Assume that we were working in Scheme, except that it only supported the definition of top-level functions and also had special syntax for calling functions through "function pointers". We could then implement higher-order procedures, lexical binding and anonymous procedures using a macro system such as the syntax-case macro system.
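To illustrate the claim that variants and pattern matching are orthogonal to static typing, here is a minimal sketch in a dynamically typed language (Python); the tagged-tuple encoding and the `match` helper are my own illustrative choices, not part of any system described above:

```python
# Variants as tagged tuples: the tag plays the role of the constructor.
def leaf(value):          # constructor: (Leaf value)
    return ("leaf", value)

def node(left, right):    # constructor: (Node left right)
    return ("node", left, right)

def match(tree, cases):
    """Dispatch on the tag and bind the constructor's fields."""
    tag, *fields = tree
    return cases[tag](*fields)

def tree_sum(tree):
    return match(tree, {
        "leaf": lambda v: v,
        "node": lambda l, r: tree_sum(l) + tree_sum(r),
    })

print(tree_sum(node(leaf(1), node(leaf(2), leaf(3)))))  # prints 6
```

No static type system is involved, yet construction and case analysis with field binding both work; a macro layer could additionally give this nicer surface syntax and exhaustiveness checks.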
The basic idea is to either redefine the top-level form for definitions or, if redefinition is not possible, to provide an alternative form for top-level definitions that transforms the definitions in the following ways:

- It transforms the code to perform closure creation. A closure holds the arity of the procedure, the pointer to the procedure, and the parameters to the procedure.
- It replaces calls to [functional] values with code that dispatches to a closure.
- It performs lambda-lifting.

Of course, we would then have to use the special macro for top-level definitions to be able to use higher-order procedures and lambda-expressions. Code not written using the special top-level definition macros would have to use special protocols for procedure calls, but it would be technically possible. Code using the new top-level definition forms could also call code that doesn't use them.

> call/cc

This can be done using a similar, reasonably modular transformation to the one explained previously. The only essential requirement is that the core language must perform tail-call optimization. Otherwise the task becomes much more complicated. [Think trampolining transformation.]

> state

I'm not sure what you mean by this. Please explain it more carefully. Are you talking about a context in which the core language would be purely functional? If so, then you must be aware of the fact that it is indeed possible to simulate stateful programming in a purely functional language, but it is not particularly efficient. [Think interpretation.]

> nondeterminism, laziness, strictness, static typing...

All of these could be done reasonably modularly using techniques similar to the ones I described above.

> shall I go on?

Please do.
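The closure-creation and lambda-lifting steps above can be sketched as follows; this is a minimal illustration in Python of what the macro's output would look like, with all names (`Closure`, `invoke`, `adder_lambda0`) invented for the example:

```python
# A closure record: arity, pointer to a lifted top-level procedure,
# and the captured environment (free variables of the original lambda).
class Closure:
    def __init__(self, arity, proc, env):
        self.arity, self.proc, self.env = arity, proc, env

def invoke(clo, *args):
    """Calls through functional values dispatch to the lifted
    procedure, passing the captured environment explicitly."""
    assert len(args) == clo.arity
    return clo.proc(clo.env, *args)

# Source: (define (adder n) (lambda (x) (+ n x)))
# After lambda-lifting, the inner lambda becomes a top-level
# procedure taking its environment as an extra argument:
def adder_lambda0(env, x):
    return env["n"] + x

def adder(n):
    # Closure creation: capture n in the environment.
    return Closure(1, adder_lambda0, {"n": n})

add5 = adder(5)
print(invoke(add5, 3))  # prints 8
```

Code outside the transformation would have to call `invoke` explicitly; that is the "special protocol for procedure calls" mentioned above.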
So far you have not provided any examples that I have not already thought about before (except perhaps "state", because I'm not certain what you mean by it), and I'd definitely like to hear about issues that I haven't thought about, because they may turn out to be important.

Please note that I'm not claiming that designing a hopelessly brainfucked, but technically Turing-complete, core language and then attaching a powerful syntactic abstraction system on top of the idiotic core language would be a good way to design the kind of language framework I'm thinking about. I haven't, of course, designed the framework yet, but at the moment it would seem to me that the minimal core language I would use:

- would offer O(1) time imperative updates,
- would have full (syntactic) tail-call optimization,
- would probably support partial application of procedures,
- would offer both dynamically checked and unchecked primitive operators.

Of course, the above list isn't complete or final; don't interpret it as such.

Let me anticipate your response. You will say that these things can be implemented by surrounding your entire program with a macro. Good. I can see that we agree here. This is the exact reason why I think that a simple macro system may not be good enough. On the other hand, I can already see reasonably modular techniques for implementing these things using a macro system similar to the syntax-case macro system of Scheme. However, I can also imagine the benefits of having a fundamentally syntactic abstraction mechanism, or a compile-time metaprogramming facility, or an integrated program transformation system, that would indeed allow such things to be implemented highly modularly.
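As an illustration of why full tail-call optimization matters in the core language, here is a minimal sketch of the trampolining transformation mentioned earlier, written in Python precisely because Python lacks TCO; the `trampoline` driver and the `is_even`/`is_odd` pair are my own illustrative names:

```python
# A trampoline simulates tail-call optimization in a language without
# it: tail calls return a thunk instead of calling directly, and a
# driver loop runs the thunks in constant stack space.
def trampoline(result):
    while callable(result):
        result = result()
    return result

# Mutually recursive predicates written in "tail-call" style;
# a direct recursion this deep would overflow Python's stack.
def is_even(n):
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

print(trampoline(is_even(100001)))  # prints False
```

If the core language already guarantees TCO, none of this machinery is needed, which is why the transformation becomes "much more complicated" without it.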