We’re here today with Leon Starr (also on twitter). Leon is one of the pioneers of executable models. He has written about the topic on this same blog, published several books covering different aspects of executable UML (and executable models in general) and created a site devoted to teaching these same topics.



Leon has just published a new book: Models to Code (with no mysterious gaps), co-authored with Andrew Mangogna and Stephen Mellor (one of the founding fathers of Executable UML and the author of the bestselling book on this topic, and, by the way, also a fixture in all my presentations thanks to this sentence 🙂 ). As Leon puts it, the goal of the book is to show, in great detail, how exactly to get from executable, platform-independent requirements models to efficient production code. No more hand waving. All code and models are 100% open source.

The book was the perfect excuse to sit down with Leon and talk at length about executable models from all kinds of perspectives. I learned a lot from him and I'm sure you will too, so let's start with the interview with Leon Starr.

Who is Leon Starr? Can you introduce yourself to our readers?

I’ve been actively involved in model-based software for over 25 years. Long before UML came along we were building executable object-oriented models!

My focus was not so much on being a computer scientist, or at least not one who devotes himself to building standards or methodologies, but on applying them in real projects. I take less of a science perspective and more of an engineering one, helping teams to successfully apply executable modeling approaches in practice.

You've always had this passion for executable models?

In the early days, when I started working with models, having been a programmer before, what frustrated me was that with a program I know whether it works and what it does. There is a way to objectively describe, observe and test a program. When you have something that behaves in a deterministic way, you can gauge immediately whether it solves your problem, or at least determine whether the problem you need to solve is being addressed.

Contrast that with modeling in the early days: you splash a lot of symbols on a piece of paper, and if you ask five different people what your model means, you'll get five different answers. It's too easy to hide the problem behind the symbols. You can just say, "here is where we are going to solve this problem". With non-executable models, it is too easy to sweep problems under the rug.

What I like about executable modeling is that it forces you to look at an actual solution and to evaluate whether it is correct or not.

I’m sure at this point some readers will be thinking: why don’t you just program it?

The issue there is the right tool for the right job. Are you working at the best level of abstraction to solve the problem you need to solve? That’s one reason why not many people use assembler language nowadays.

The question then is: does your modeling language offer something other than the same building blocks you’d find in a programming language? I’d say this is one of the main reasons for the failure of UML to take the world by storm.

Surprisingly, this low abstraction level is considered to be a key feature by many tool vendors who emphasize their round trip capabilities from models to Java (or C++ or…) and back. You don’t do roundtrip from assembly language to Java and back because they are clearly at different levels of abstraction.

An argument could be made that drawing pictures of your code has a purpose but that’s not what we’re really talking about here.

Look at Simulink, its building blocks don’t look at all like programming primitives. They look like the subject matter you’re trying to model. The building blocks are very relevant to the kind of mathematical problems you’re trying to solve. To me, this is a sign of a successful language. And if you look at the code Simulink generates, it looks very different from the model and it’s pretty obvious that looking at the model is a better way to understand the problem.

What kind of models / languages do you use then?

We use the standard UML notation but with quite different model level semantics. We follow the semantics published in the Executable UML book.

Our focus is to ensure the models are completely executable, without any inserted code. Just action language (in the book we use Scrall for this), class diagrams and state models. You could also use Alf if you prefer. Still, Alf semantics are akin to those of object-oriented programming. There’s almost the implicit assumption that when using Alf of course you’re going to target Java or an OOP language. In our book we target C. Our semantics are based more on mathematics (predicate theory, set theory and relational theory) and are therefore more general and applicable to other platforms.

The other focus is platform-independence. Imagine you’re building a model for a cardiac pacemaker. And you need a model of how the heart behaves (what events it’s waiting for, how it reacts,…). The models of this are clearly platform-independent. The heart muscle has nothing to do with Java, or Windows, or any such software technology. You must be able to model the physiological reality independent of all that. Once you’ve tested the model, the next task is to map that model onto a particular platform. Only then can you start thinking on the degree of concurrency you want to have and how to leverage all the other specific technologies. But you don’t want this process of adding software technology to destroy your original model.

Can you elaborate more on your workflow to go from models to code? Is it similar to MDA?

Something like that. In the book, we present a reference workflow that takes models to code through a translation process. To be sure, this reference model assumes an ideal world where you have all the technologies and metamodels you need.

In the book, we instantiate this workflow but cheat a bit and take some shortcuts to be able to exemplify the process of ending up with real code that works according to all the demands of the platform without having to modify the original model.

In our particular approach, we build a model of the platform. It’s not really a metamodel but a representation of how we want to see the models implemented in that platform. For instance, in the book, we translate classes in the model to various patterns of C structs. Our mapping process would then instantiate the platform model with information about the C structs to be created from the UML classes.
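To make this class-to-struct translation concrete, here is a minimal sketch of the kind of C code such a mapping might produce. The class name, attributes and the static instance pool are all hypothetical illustrations (the book's actual generated code is not reproduced here); the pool pattern is a common choice when translating class instances to C on embedded targets that avoid dynamic allocation.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical UML class "Patient" with attributes ID and Heart rate.
 * One translation pattern: one C struct per class, plus a statically
 * sized instance pool instead of malloc on the target. */
typedef struct Patient {
    uint16_t id;          /* class attribute: ID */
    uint16_t heart_rate;  /* class attribute: Heart rate (bpm) */
    bool in_use;          /* slot bookkeeping added by the translation */
} Patient;

#define PATIENT_POOL_SIZE 8
static Patient patient_pool[PATIENT_POOL_SIZE];

/* Instance creation maps to claiming a free slot in the pool. */
Patient *patient_create(uint16_t id, uint16_t heart_rate)
{
    for (int i = 0; i < PATIENT_POOL_SIZE; i++) {
        if (!patient_pool[i].in_use) {
            patient_pool[i].id = id;
            patient_pool[i].heart_rate = heart_rate;
            patient_pool[i].in_use = true;
            return &patient_pool[i];
        }
    }
    return 0; /* pool exhausted */
}
```

The point is that the mapping from class to struct is mechanical once the pattern is chosen; the DSL only has to name the pattern, not spell out the code.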

This mapping is defined with a textual DSL that allows you to re-express the model according to the platform features and the design decisions you want to adopt in the translation. For instance, if you have a generalization in your class model, there are two different patterns to choose from to translate it into C structs: either a single struct with a union of the attributes, or multiple structs linked with pointers. When mapping your model with the DSL you don’t need to code how to generate the structs you want. You simply indicate which design pattern you want to apply.
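The two generalization patterns mentioned above can be sketched in C as follows. The class names (Shape, Circle, Rectangle) and field choices are hypothetical examples, not the book's own code, but they show the structural difference the DSL is selecting between.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical generalization: superclass Shape with subclasses
 * Circle and Rectangle. */

/* Pattern 1: one struct with a tagged union of subclass attributes. */
typedef enum { SHAPE_CIRCLE, SHAPE_RECT } ShapeKind;

typedef struct {
    ShapeKind kind;       /* which subclass this instance is */
    int16_t x, y;         /* superclass attributes */
    union {
        struct { uint16_t radius; } circle;
        struct { uint16_t width, height; } rect;
    } sub;
} ShapeUnion;

/* Pattern 2: separate structs linked with pointers. */
struct CirclePart;
struct RectPart;

typedef struct Shape {
    int16_t x, y;              /* superclass attributes */
    struct CirclePart *circle; /* exactly one of these is non-NULL */
    struct RectPart *rect;
} Shape;

typedef struct CirclePart {
    Shape *super;              /* back-pointer to the superclass part */
    uint16_t radius;
} CirclePart;

typedef struct RectPart {
    Shape *super;
    uint16_t width, height;
} RectPart;
```

Pattern 1 keeps every instance in one allocation and is friendly to static pools; pattern 2 trades pointer indirection for the ability to navigate and store subclass parts independently. Choosing between them in the DSL is exactly the kind of design decision the translation process makes explicit.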

Once you have this DSL file ready, you can generate the code. The next figure shows the C code generated from the previous example.

We wanted the readers to be exposed to this kind of translation process. We don’t claim ours is the best approach nor that it’s the only one. But we believe that by looking at our concrete examples you can get a feel for what principles are applied and how they could be used when mapping your models to a different platform.

The purpose of this book is not to teach how to model. Most modeling books focus on the modeling language syntax or patterns and only at the end do they give a hint about how that can be translated into code. By contrast, our book starts with the models already defined and focuses on how to actually map those models into code and how to apply design features to those models to successfully map them onto the platform.

What are the main platforms you target with your executable models?

One of the myths I hear over and over is that the problem with translation is that you need to create a distinct platform model for every single platform. My answer is “yes, if you do it stupidly”. But programmers aren’t stupid and they don’t.

It is important that people are aware of all the decisions and intricacies of moving models to code, even if they end up using a “lighter” version of our method, like current low-code platforms where most of the decisions are already made for them.

There are endless books and papers and talks on the generalities of getting from models to code. We wanted to take a different approach, so we thought: let’s just take a particular example and drive it all the way through.

It doesn’t matter. You can use any tool you want to build your models, since at the design phase you’ll script your model and the design decisions with our DSL, and that’s the input to the code generation.

Of course, you could build an export tool that links with an existing popular tool and generates a first version of the DSL for you to complete. Everything is open source so feel free to do it!

I do like graphical models, but I think we offer a good balance. You specify your platform independent models graphically and then your design models textually and benefit from all of the powerful editing features of programming editors.

Have you ever felt the need to use DSLs for the application domains you’ve worked on?

For any subject matter, you could take your UML model and use the concepts in there to specify a DSL for that particular domain. There’s nothing wrong with that and, in fact, we’ve often used that method to populate our Executable UML models. In many of the projects I’ve worked on, we combine several languages (e.g. Simulink + UML). The important thing is to use the right tool for the job.

This combination of languages requires a platform-independent partitioning of the system. We partition per domain. A domain is a subject matter (a set of concepts and the logic that ties those concepts together), and we then target each partition with the best language for it. This minimizes the dependencies. Still, every domain needs to be executable on its own.

There is another trade-off here: you typically don’t want too many languages in play, so you have to balance all aspects.

Do you think there is a growing interest in executable models?

Now that models are becoming more executable, now that this is becoming an expected feature, it’s a whole new game. We can now do lots of things we couldn’t do before with non-executable models and I think this will generate a lot of renewed interest in modeling.

One of the major drawbacks for the adoption of modeling was the lack of a clear path to get those models into code. The adoption does not depend so much on your particular language notation but on understanding the role and the transition process from the models to code.

Understanding this process also helps you to build tighter models that express only what is needed by the application and don’t try to mix in implementation details that can be better handled elsewhere.

What makes a model useful is what you leave out of it. You can never complete a model if you don’t have a systematic process for deciding what kinds of details are in and which are out.

And where are we heading?

We are still in the early days. Once people understand how to go from models to the code, there is a bright future for modeling.

I know this is sacrilege, but I think that in many ways, UML killed modeling or, better said, it killed useful modeling. This was not the OMG’s intention of course, but it was the result. UML encouraged people to draw pictures of the code, which to me is not modeling, or at best the weakest possible example of modeling. And that didn’t succeed very well.

With Executable UML and translation-based code generation, I think we’re doing something fundamentally different. You have to make sure your modeling language provides value beyond shaping your code. It must allow you to express your subject matter at a level of abstraction that exposes the concepts, requirements and rules of that subject matter clearly, without regard to any particular implementation. And it must allow you to objectively test and run your models.

And this is just the first book we are putting out. My next project is a “Requirements to Models” book that complements this first one by explaining how to get from the fuzzy requirements in somebody’s head to fully detailed platform independent models.

I hope you have enjoyed this conversation with Leon. And while we wait for this future “requirements to code” project to materialize, go ahead and grab your copy of the Models to Code book. Enjoy your reading and let us know your thoughts on the book or on the topics covered in the interview.