Posted on May 28, 2016

About the Project

This is a small demonstration project showing how a simple byte-code-interpreting stack machine (virtual machine) can be built with Haskell. It is not a production VM, nor of any particular practical use; it is simply a demonstration. All code is available in the github repo https://github.com/andrevdm/SimpleHaskellStackVM

I built this mainly as a project for learning Haskell, i.e. something a little bigger to work on. So NB: this is probably not idiomatic Haskell, and may have some newbie mistakes. Hopefully it is interesting enough despite this…

Stack Machines

A stack machine is a real or emulated computer that uses a pushdown stack rather than individual machine registers to evaluate each sub-expression in the program. A stack computer is programmed with a reverse Polish notation instruction set. - wikipedia [1]

Stack machines are simpler to implement than register machines but are still practical even for production VMs.

Writing the VM in Haskell

Virtual machines are typically written in a low-level language like C for maximum efficiency, and typically use mutable data structures. Here I’m using Haskell and pure functional data structures. I have absolutely no performance goals, so there are no constraints I need to worry about.

A few design decisions

I’m using a Data.Sequence rather than a list. While I’m not concerned about performance, using a linked list for random access is still a bad idea.

Try to avoid bottom (⊥), so use “safe” versions whenever ⊥ could otherwise be returned (toEnum, index, head etc.)

I’m using Stephen Diehl’s Protolude [2] and removing the default prelude

Using immutable data structures like Data.Sequence has worked out nicely and means that as the VM runs you can keep a full history of the VM’s state. So you have everything (apart from time) you need to build a nice visualiser / ‘time travelling’ debugger.

The VM

Opcodes & byte code

The opcodes are the operations that the virtual CPU can perform. These are simple operations like push, pop, add and jump. The opcodes are encoded as bytes together with the opcode parameters (e.g. the value to push); this forms the byte code.

Here is the current set of opcodes understood by the CPU
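The original listing is not reproduced here, but based on the mnemonics used throughout this post the opcode type might look roughly like the following sketch (the constructor order, and hence the byte values, is an assumption):

```haskell
-- Sketch of the opcode set, reconstructed from the mnemonics used in
-- this post; the actual order (and so the byte values) is an assumption
data Operation = Nop
               | Push | Pop
               | Add
               | Jmp
               | Beq | Bne | Blt | Blte | Bgt | Bgte
               | Call | Ret
               | LdArg
               | PopPrev
               | Halt
               deriving (Show, Eq, Ord, Enum, Bounded)
```

Deriving Enum is what lets the assembler turn a mnemonic into a byte with fromEnum.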

The virtual CPU

The data structure representing the CPU is defined below
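The definition is not shown here; a minimal sketch consistent with the fields this post mentions (cpuStack, ip, fp) might be:

```haskell
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq

-- Sketch of the CPU state record; the field names are assumptions
-- based on the ones mentioned in the text
data Cpu = Cpu { cpuStack :: Seq Int  -- the value stack, head at the front
               , cpuIp    :: Int      -- instruction pointer: next op to run
               , cpuFp    :: Int      -- frame pointer: start of the current stack frame
               } deriving (Show)

-- the state the interpreter starts from: empty stack, ip at the
-- beginning of the code, no active stack frame
emptyCpu :: Cpu
emptyCpu = Cpu { cpuStack = Seq.empty, cpuIp = 0, cpuFp = -1 }
```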

This is fairly minimal but it’s more than enough for a simple VM like this.

Some things to note

The stack (cpuStack) is part of the CPU. This makes sense for a stack machine since a stack is core to everything it does. It also means that as the CPU runs you get a full history of each stack state along with the CPU flags at the time each opcode was run

There is no need for a stack pointer since the stack is a ‘dynamically’ growing Data.Sequence, i.e. an sp would always point to the head of cpuStack.

The instruction pointer (ip) points to the next instruction to run.

In this implementation the byte stream is fixed (no self-modifying code), so there is no need to copy it on each CPU operation

The frame pointer (fp) is discussed below in the section about function calls (Call & Ret)

The byte code assembler

Rather than writing the byte code in hex, a very simple assembler is used. Later on additional assemblers and compilers can be layered on top of this low level assembler, which does little more than convert opcode mnemonics to byte code.

An operation can take parameters from the byte code stream. For instance a push instruction takes a parameter that is the value to push onto the stack. The type that represents this is the inventively named OpAndParam
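As a sketch (with a cut-down opcode set so it stands alone; all names are assumptions), OpAndParam pairs an opcode with its literal parameters, while Instruction records the counts the assembler and interpreter need:

```haskell
data Operation = Nop | Push | Pop | Add deriving (Show, Eq)

-- an opcode together with the literal parameters that follow it
-- in the byte stream, e.g. OpAndParam Push [123]
data OpAndParam = OpAndParam Operation [Int] deriving (Show, Eq)

-- how many byte-stream parameters the op takes, and how many values
-- it pops off the stack
data Instruction = Instruction { instrOp     :: Operation
                               , instrParams :: Int
                               , instrPops   :: Int
                               } deriving (Show)
```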

Having a type that defines the number of parameters an op takes and how many values it pops off the stack allows the assembler to perform some basic checks. The interpreter can also use this definition when running the code.

The instructions can then be setup and a map created from opcode to Instruction

The assembler then does nothing more than converting the opcode enum to a byte (an Int in the code, but it would be serialised as a byte) and checking the number of parameters for each opcode. It is small enough to be pasted in full here.
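The repo has the real thing; a minimal sketch of the same idea, with a cut-down opcode set and assumed names, could look like this:

```haskell
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- cut-down types so the sketch is self-contained; names are assumptions
data Operation = Nop | Push | Pop | Halt deriving (Show, Eq, Ord, Enum, Bounded)
data OpAndParam = OpAndParam Operation [Int] deriving (Show)
data Instruction = Instruction { instrParams :: Int, instrPops :: Int }

-- the map from opcode to its Instruction definition
instrByOp :: Map Operation Instruction
instrByOp = Map.fromList [ (Nop,  Instruction 0 0)
                         , (Push, Instruction 1 0)
                         , (Pop,  Instruction 0 1)
                         , (Halt, Instruction 0 0) ]

-- convert each op to its enum value followed by its parameters,
-- failing if the parameter count does not match the definition
assemble :: [OpAndParam] -> Either String [Int]
assemble ops = concat <$> traverse asm ops
  where
    asm (OpAndParam op ps) =
      case Map.lookup op instrByOp of
        Nothing -> Left ("unknown op: " <> show op)
        Just i
          | length ps /= instrParams i -> Left ("bad param count for " <> show op)
          | otherwise -> Right (fromEnum op : ps)
```

With this cut-down numbering, assemble [OpAndParam Push [123], OpAndParam Halt []] gives Right [1,123,3], while a push with no parameter is rejected.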

The interpreter

With all of that in place the interpreter can finally be written.

The interpreter takes the output of the assembler, or bytes loaded from a file, and runs it, producing CPU state along the way. In debug mode the interpreter stores all the states as it interprets; in “non-debug” mode only the last state is kept. Note that currently the interpreter is always in debug mode.

Before looking at the implementation of the interpreter it’s worth going over a few examples of how it should operate.

Push & Pop

In the first example the value 123 is pushed onto an empty stack. Then the value 456 is pushed. The head of the stack is at the “bottom”

A pop is the opposite of a push. The pop operation gets the most recent value from the stack (LIFO), in this case 456, leaving 123 on the stack.

Add

An add operation pops the top two items from the stack, adds them and pushes the result back onto the stack

Look at the definition of the instructions for these three operators

From this you can see that the Instruction shows that a push takes one parameter, a pop takes no parameters but pops a single value off the stack and an Add pops two values off the stack. As noted in the code comments opSimple indicates that these are simple operators with fixed stack effects.
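The original definitions are not shown here, but they might look something like this sketch (names are assumptions; an Instruction record carrying a parameter count and a pop count):

```haskell
data Operation = Push | Pop | Add deriving (Show, Eq)

data Instruction = Instruction { instrOp     :: Operation
                               , instrParams :: Int  -- byte-stream parameters
                               , instrPops   :: Int  -- values popped off the stack
                               } deriving (Show)

pushInstr, popInstr, addInstr :: Instruction
pushInstr = Instruction Push 1 0  -- one param: the value to push; pops nothing
popInstr  = Instruction Pop  0 1  -- no params; pops a single value
addInstr  = Instruction Add  0 2  -- no params; pops two values, pushes their sum
```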

Jmp

The Jmp operator performs an unconditional jump to a fixed location relative to the start of the code stream. I.e. Jmp 100 sets the instruction pointer to 100 and execution continues from there.

Consider the following simple list of ops in Haskell
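The original listing is not reproduced here; it might look something like this sketch (type and constructor names are assumptions), which with single-byte opcodes and parameters gives the offsets 0 to 3 discussed below:

```haskell
data Operation = Nop | Jmp | Halt deriving (Show, Eq)
data OpAndParam = OpAndParam Operation [Int] deriving (Show, Eq)

-- the example program: jump over the Nop straight to the Halt
jmpExample :: [OpAndParam]
jmpExample = [ OpAndParam Jmp  [3]  -- offsets 0-1: opcode and jump target
             , OpAndParam Nop  []   -- offset 2: skipped by the jump
             , OpAndParam Halt []   -- offset 3: execution continues here
             ]
```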

This gets assembled into the following byte code

The Jmp instruction causes the CPU to set the instruction pointer (ip) to 3. In this example that means that the Nop at offset 2 is skipped and execution continues with the Halt operation at offset 3

Branching (Beq, Bne, Blt, Blte, Bgt, Bgte)

Branching is a conditional jump. The top two values are popped off the stack and compared based on the type of conditional operator.

In the following example the values 1 and 2 are pushed. Bgt is executed and if 2 is greater than 1 then the CPU jumps to location 7. If not, it continues executing at the location after the branch (6).
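A sketch of what that program might look like (names are assumptions; offsets assume one byte per opcode and per parameter):

```haskell
data Operation = Push | Bgt | Nop | Halt deriving (Show, Eq)
data OpAndParam = OpAndParam Operation [Int] deriving (Show, Eq)

branchExample :: [OpAndParam]
branchExample = [ OpAndParam Push [1]  -- offsets 0-1
                , OpAndParam Push [2]  -- offsets 2-3
                , OpAndParam Bgt  [7]  -- offsets 4-5: if 2 > 1, jump to 7
                , OpAndParam Nop  []   -- offset 6: runs only if the branch is not taken
                , OpAndParam Halt []   -- offset 7: the branch target
                ]
```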

This is what the history of the CPU would look like for the above example

Call & Ret

A function call is much like a jmp except that you have to store an address to return to. You could have two stacks, one for values and one for return addresses. It’s more common, however, to have a single stack with “stack frames”.

As a trivial example consider the case when there are no parameters

The CPU does the following

Executes the call operation and pushes the return address onto the stack, i.e. the next instruction after the call. Here it is the halt at 02.

The ip is set to 03, the offset of the function, and the CPU executes the function (nop).

The ret operation gets the return address (02) from the stack and updates the ip.

The halt at 02 is executed.

However this simple scheme does not work when you have variable numbers of parameters, locals etc. This is where the frame pointer (fp) and stack frames come in.

A stack frame is the set of data stored on the stack for each method call. In this virtual machine that is:

1. The return address
2. Parameters for the function
3. The previous frame pointer value

As an example consider a function that adds two numbers and returns the sum.
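The assembly for this example is not reproduced here; it might look roughly like the following sketch (names and exact offsets are assumptions, chosen to match the addresses mentioned in the walkthrough below):

```haskell
data Operation = Push | Call | PopPrev | Halt | LdArg | Add | Ret
  deriving (Show, Eq)
data OpAndParam = OpAndParam Operation [Int] deriving (Show, Eq)

-- call a two-argument add function; offsets assume one byte per
-- opcode and per parameter
addCallExample :: [OpAndParam]
addCallExample = [ OpAndParam Push    [22]   -- offsets 0-1: parameter 2 first
                 , OpAndParam Push    [123]  -- offsets 2-3: parameter 1 last
                 , OpAndParam Call    [9]    -- offsets 4-5: return address is 06
                 , OpAndParam PopPrev [2]    -- offsets 6-7: caller cleans up the params
                 , OpAndParam Halt    []     -- offset 8
                 , OpAndParam LdArg   [1]    -- offsets 9-10: the function body
                 , OpAndParam LdArg   [2]    -- offsets 11-12
                 , OpAndParam Add     []     -- offset 13
                 , OpAndParam Ret     []     -- offset 14
                 ]
```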

The caller pushes the two values to be added (123 and 22), these are pushed in reverse order. I.e. parameter 1 last.

The current frame pointer (-1) is stored. If no function has been called this will be -1. Further down in the example this will make more sense

The return address (06) is pushed onto the stack

The frame pointer is set to the start of the stack frame (02), i.e. position 2 from the end of the stack

The CPU sets the instruction pointer and “jumps” to the function (09) being called.

The function can then push any values it needs onto the stack and do its work.

Here the function loads the two arguments with LdArg 1 and LdArg 2 (i.e. gets the params to the top of the stack) and then calls Add to add them. In this VM a function always returns a single value on the stack.

Then when a ret operation is executed the CPU needs to do the reverse. The return value is popped, and the stack shifted back to the fp.

Notice that the original parameters are still on the stack. This is normal for the cdecl calling convention. The caller is responsible for cleaning up.

The PopPrev operation is used to do the parameter clean-up by the caller. It pops the number of items specified before the item at the top of the stack

The important thing to notice is that since the old frame pointer is stored on the stack, you can call multiple functions and always return to the previous one. Having the fp also lets you unwind the stack to the frame no matter how many items the current function may have pushed onto the stack, i.e. you don’t need to track that yourself.

Here is the output for the code above

The interpreter code

The interpreter converts a sequence of byte codes into a list of CPUs; the last CPU in the list is the final state.

Start interpreting with an empty CPU
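The interpreter itself is in the repo; to make the idea concrete, here is a heavily cut-down, self-contained sketch (assumed byte values: 0 = Nop, 1 = Push, 2 = Add, 3 = Halt) that steps through a code Seq and accumulates every CPU state, debug-style:

```haskell
import Data.Foldable (toList)
import Data.Sequence (Seq, (<|))
import qualified Data.Sequence as Seq

data Cpu = Cpu { cpuStack :: Seq Int, cpuIp :: Int } deriving (Show)

-- run the byte code from an empty CPU, emitting every intermediate
-- state; any error (bad opcode, underflow) simply stops the run
interpret :: Seq Int -> [Cpu]
interpret code = go (Cpu Seq.empty 0)
  where
    go cpu =
      cpu : case Seq.lookup (cpuIp cpu) code of
        Nothing -> []                                  -- ran off the end: stop
        Just 3  -> []                                  -- Halt
        Just 0  -> go cpu { cpuIp = cpuIp cpu + 1 }    -- Nop
        Just 1  ->                                     -- Push: one parameter
          case Seq.lookup (cpuIp cpu + 1) code of
            Nothing -> []                              -- missing parameter: stop
            Just v  -> go cpu { cpuStack = v <| cpuStack cpu
                              , cpuIp = cpuIp cpu + 2 }
        Just 2  ->                                     -- Add: pop two, push sum
          case toList (Seq.take 2 (cpuStack cpu)) of
            [a, b] -> go cpu { cpuStack = (a + b) <| Seq.drop 2 (cpuStack cpu)
                             , cpuIp = cpuIp cpu + 1 }
            _      -> []                               -- stack underflow: stop
        Just _  -> []                                  -- unknown opcode: stop
```

Running it on the byte code for push 123; push 22; add; halt yields four states, the last with 145 on the stack.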

The code does the following checks, to get a valid instruction for the byte

Check that there is a start CPU

Check that there is an opcode in the sequence at the ip index

Check that the opcode is valid, i.e. belongs to the Operation enum

Check that the opcode is in the instruction map instrByOp

Get the parameter bytes from the code stream. E.g. Push takes a single param, the value to be pushed onto the stack

Pop the required number of values from the stack.

Check that both operations succeed

Finally the core interpreter code can be called. Since the params and pops are now stored as lists and all checks have been performed, this code is quite simple.

Remember that in this VM there are two types of instruction: simple and complex. Simple instructions are fully defined by their Instruction; complex instructions have full control over the CPU.

Simple instructions are given an empty CPU and return a CPU with the values that need to be changed. For example, simple instructions can not pop extra values or change the fp.

Complex instructions are not fully defined by the Instruction and can change the CPU in any way they need to.

Simple instructions

Nop just returns the current CPU

Push sets the stack (appended with current stack in the interpreter code above)

Pop also just returns the CPU unchanged. This is because, as a “simple” instruction, the fact that it pops a single value is configured in the Instruction, and the interpreter does that.

The branch instructions call the branchIf function with the appropriate conditional operator

The branch checks the predicate (prd) and if it is True the ip is updated to the ip specified by the branch.
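branchIf is not shown; its core can be sketched like this (field and argument names are assumptions): given the two popped values and the target from the byte stream, it updates the ip only when the predicate holds.

```haskell
data Cpu = Cpu { cpuIp :: Int } deriving (Show, Eq)

-- a and b are the two values the interpreter popped for the branch;
-- target is the branch's byte-stream parameter
branchIf :: (Int -> Int -> Bool) -> Int -> Int -> Int -> Cpu -> Cpu
branchIf prd a b target cpu
  | prd a b   = cpu { cpuIp = target }  -- branch taken: jump to target
  | otherwise = cpu                     -- not taken: continue as normal
```

For the Bgt example above, branchIf (>) 2 1 7 applied to a CPU at ip 6 moves the ip to 7.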

Complex Instructions

Call creates a new CPU, setting the ip, fp and stack frame as discussed above in the section about Call & Ret

Ret gets the old fp and the return address from the stack frame and constructs the new CPU to return to the caller.
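The repo’s Ret is authoritative; this sketch illustrates the bookkeeping under some explicit assumptions: index 0 is the bottom of the stack, and the frame at fp holds the saved old fp followed by the return address, with the function’s parameters below it and anything the function pushed above it.

```haskell
import Data.Sequence (Seq, (|>))
import qualified Data.Sequence as Seq

data Cpu = Cpu { cpuStack :: Seq Int, cpuIp :: Int, cpuFp :: Int }
  deriving (Show, Eq)

-- pop the return value, unwind the stack to the frame (leaving the
-- parameters for the caller to clean up), restore the old fp and set
-- the ip to the saved return address; Nothing on a malformed frame
ret :: Cpu -> Maybe Cpu
ret cpu = do
  retVal  <- Seq.lookup (Seq.length st - 1) st  -- return value on top
  oldFp   <- Seq.lookup (cpuFp cpu) st          -- saved frame pointer
  retAddr <- Seq.lookup (cpuFp cpu + 1) st      -- saved return address
  pure cpu { cpuStack = Seq.take (cpuFp cpu) st |> retVal  -- unwind, keep result
           , cpuIp = retAddr
           , cpuFp = oldFp }
  where st = cpuStack cpu
```

For the add example, a stack of [22, 123, -1, 6, 145] with fp 2 returns to ip 6 with [22, 123, 145] on the stack and fp back at -1; the parameters remain for the caller (PopPrev) to clean up.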

What’s next?

This explanation of the stack machine is significantly longer than the code for it :). Hopefully you’ll be able to see how easy creating a simple stack machine is. There are many things that can be built on top of this. E.g.

A higher level assembler with support for labels, i.e. don’t make the user count offsets

A simple higher level language that generates this new assembler code. E.g. stack/concatenative language or a functional language

Many more operators



Code optimisation

Support for globals

IO

…..

See also