Loop is a compact JVM language influenced by Haskell, Scheme, Ruby and Erlang. It tries to bring together the best features of functional and object-oriented languages in a consistent and pragmatic manner.

Programs are compiled on the fly to optimized JVM bytecode, so they suffer none of the performance penalty of interpretation, all while maintaining the quick, edit-and-run responsiveness of interpreted code.

The Loop file structure is:

1. module declaration
2. import declarations
3. functions & type definitions
4. free expressions

Here's an example of a Loop program:

    module mymodule

    require othermod
    require yet.another

    class Pair ->
      left: 0
      right: 0

    main ->
      new Pair()

    # comments may appear anywhere
    # free expressions must appear last
    print('mymodule is go!')

InfoQ had a small Q&A with the creator of Loop, Dhanji R. Prasanna, an ex-Googler, co-author of the JAX-RS spec, and author of Manning's “Dependency Injection: Design Patterns”:

InfoQ: How does Loop compare with the rest of the JVM languages?

Dhanji: I don't want to do a nitty-gritty feature comparison, but I would say the philosophy of Loop is about providing a consistent, simple and joyful coding experience. All features are designed with this comprehensive outlook in mind, and with care for how features interact with each other, both syntactically and semantically. In other languages you sometimes have multiple ways to do the same thing, almost as a feature of the language, and many of these feel bolted on. In Loop I've tried to narrow down the canonical ways of doing things so that they are concise and simple, and result in an attractive, comfortable syntax. It should be as easy to read code as to write it, and just as much fun.

One other point of distinction is that Loop compiles directly to JVM bytecode, but does so on the fly. That means it behaves and *feels* like a scripting language (and is REPL-esque, like a Lisp), but it actually performs considerably better than an interpreted language. I will let others run benchmarks, but so far in my quick tests Loop is extremely fast. I have also put an emphasis on startup time, so it starts up nearly as fast as Java itself allows; a fact that is often ignored by many modern JVM languages.

Loop also interoperates tightly with Java. It is really easy to call Java methods or instantiate Java objects from inside a Loop program. Lists, sets and maps are just java.util members, but with several extensions (and similarly for Strings). This is different from some languages that maintain two sets of data libraries in order to provide extensions.

And finally, Loop has built-in support for concurrency right from the start, with immutability and safe sharing of state as integral features.

InfoQ: You mention that many of Loop's features are inspired by languages like Haskell, Scheme and Ruby. Would you like to give us a few examples?

Dhanji: Sure. It's always hard to address this, because when you say "inspired by", people tend to think "is a copy of" and will look with a fine-toothed comb for deviation. The takeaway from my perspective is about syntax influences from these languages. In particular, the easy ones are pattern matching and "where" & "do" blocks from Haskell; the type system, modules, TCO and lexical constructs (closures) of Scheme; and ideas like symbols and free-form scripting from Ruby.

One syntactic example of this combined influence is the fact that you can call functions in postfix notation:

    print(36)

    # can be written as:
    36.print()

...which makes them read like a Ruby method call, but in actual fact they are simply polymorphic (overloaded) functions. I find this very useful for improving the readability of some *particular kinds* of code, especially when "extending" the functionality of existing Java objects. Of course, there is a balance to be struck, but I believe that will come as Loop matures.

The deeper, semantic influences of Haskell and Scheme (the latter in particular) are also present in the functional design of the language. One example is getting away from stateful, encapsulation-oriented design toward a more stateless, declarative design. Like Scheme, Loop features an impure superset for IO; on the other hand, it enforces immutability whenever you deal with concurrency. The latter is the philosophical influence of Haskell. Additionally, the emphasis on making declarative code easier to write (and read) is also an influence from Haskell. I like the philosophy that code should read like a solution, rather than a laundry list of instructions; in other words, emphasizing the "what" rather than the "how" of the program, and Loop definitely follows this philosophy.

InfoQ: Loop seems to put a lot of emphasis on concurrency and provides a built-in message-passing abstraction. Would you like to explain to us how that compares with other popular concurrency technologies, either on the JVM (languages or frameworks) or beyond (e.g. Erlang)?

Dhanji: This is a good question. There's a lot of prior art for this from Erlang. There are two primary methods for doing concurrency in Loop; both are built into the language, and they are useful in conjunction with each other:

- message-driven channels (an event-oriented abstraction over message-passing, queues and worker pools)
- software transactional memory (a lock-free, atomic, consistent scheme for sharing mutable state)

With the former, the entire abstraction is taken care of for you. You set up any number of lightweight "channels", which can be configured to execute in parallel, to chew through large numbers of tasks, or to process a single task at a time (serially) in buckets. This provides a really easy way to create naturally *sharded* event queues. Since channels are lightweight, you can create tens of thousands of them cheaply and use them to shard task execution, for example by username. There is also a lightweight, persistent local memory available to each such serial channel, making it easy to implement incremental task processing. Loop also ensures that worker threads are evenly balanced over channels, using a configurable "fairness" factor. All of this is available right out of the box, and on top of that one can further configure each channel to have its own dedicated worker pool, and so on.

I mentioned the lightweight persistent memory for serialized channels earlier; transactional memory, on the other hand, is a much more powerful construct, similar to the "optimistic concurrency" used in databases, if you are familiar with that. The idea is that nothing ever locks, even when writing. This type of memory is optimized for extremely high read throughput and non-blocking write throughput. It is built right into the language grammar itself:

    update(person) in @person ->
      this.name: person.name,
      this.age: person.age

Note the "in @person", telling Loop to transact on the @person cell. In this method I'm updating the @person "transactional cell" with new data from the argument to update(). The "this" pointer refers to the current cell, which is in a transaction throughout this function. When the function completes, the changes to @person become visible atomically to all other threads, or not at all if the transaction fails (similar to a rollback). All other threads (even those not in a transaction) continue to see a consistent @person cell until the transaction executes, and then immediately see the new person as a whole, with no locking or waiting. The neat thing here is that reader and writer threads *never* block.

This feature is still somewhat in alpha, as I'm trying to get the semantics nailed down, but I feel that along with the rich channels API it makes concurrent programming in Loop elegant, powerful and easy to understand.
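Neither abstraction has a direct one-line equivalent in plain Java, but the two ideas can be loosely sketched with standard JDK primitives. This is an illustrative analogy, not Loop's actual implementation: a single-threaded executor stands in for a serial channel (tasks in one shard never race each other), and an `AtomicReference` holding an immutable record stands in for a transactional cell, with compare-and-set retry playing the part of the optimistic commit. All class and method names here (`CellDemo`, `update`, `personCell`) are invented for the sketch.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class CellDemo {
    // Immutable value: the analogue of the data held in a @person cell.
    record Person(String name, int age) {}

    // The "transactional cell": readers never block, writers commit atomically.
    static final AtomicReference<Person> personCell =
            new AtomicReference<>(new Person("ada", 0));

    // Analogue of "update(person) in @person": copy the new fields in, then
    // retry until the compare-and-set "commit" succeeds (optimistic concurrency).
    static void update(Person newData) {
        Person current;
        Person replacement = new Person(newData.name(), newData.age());
        do {
            current = personCell.get(); // always a consistent snapshot
        } while (!personCell.compareAndSet(current, replacement));
    }

    public static void main(String[] args) throws InterruptedException {
        // Serial "channels": one single-threaded executor per shard, so tasks
        // for the same shard key are processed one at a time, in order.
        ExecutorService[] shards = {
                Executors.newSingleThreadExecutor(),
                Executors.newSingleThreadExecutor()
        };
        for (int i = 0; i < 100; i++) {
            final int age = i;
            shards[i % 2].submit(() -> update(new Person("ada", age)));
        }
        for (ExecutorService shard : shards) {
            shard.shutdown();
            shard.awaitTermination(5, TimeUnit.SECONDS);
        }
        // Every reader along the way saw a whole, consistent Person;
        // here we print the name from the last committed value.
        System.out.println(personCell.get().name());
    }
}
```

Readers calling `personCell.get()` never block and always observe a complete `Person`, mirroring the consistency guarantee described above; what the sketch cannot reproduce is Loop grouping several such cells into one atomic transaction.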

You can contribute to Loop on GitHub.