For us, HPX is ‘A general purpose C++ runtime system for parallel and distributed applications of any scale’. While this is quite a mouthful, we mean every word of it. All of the recently published posts on this site so far have focused on the APIs HPX exposes for purely local operation on a single machine. In this installment I would like to start talking about how HPX exposes distributed functionality, i.e. how to use HPX to write truly distributed applications. As we will see, by introducing just minor extensions to the C++ standard, the user is able to write homogeneous code without having to pay attention to any differences between invoking functionality locally (on the current node) or remotely (on any other node in a cluster).

The Active Global Address Space (AGAS)

The key concept introduced by HPX supporting and enabling the uniform syntax for local and remote operations is what we call an ‘Active Global Address Space’ (AGAS). In short, AGAS is a 128-bit virtual address space spanning all localities a distributed application is running on (a locality in the simplest case is a node in a cluster). Currently, this is a purely software-based solution, thus the programmer has to decide which objects and which functions should be visible globally (should have a global address). Since the address space spans all localities, every global object’s address is unique system-wide. The figure below depicts the part of the system which is globally accessible in light gray and the objects which have a global address in dark blue. The parcel-port represents an abstraction of the communication channel used for inter-locality data transfer. In HPX, we call the messages sent between localities ‘parcels’.
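To make the idea of system-wide unique addresses concrete, here is a minimal sketch (not the actual AGAS implementation; the names global_id, make_global_id, and locality_of are hypothetical) of a 128-bit id split into two 64-bit halves, where encoding the locality id in the upper bits is one simple way to guarantee uniqueness:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of a 128-bit global id: two 64-bit halves.
struct global_id
{
    std::uint64_t msb;   // e.g. locality id and metadata
    std::uint64_t lsb;   // e.g. locally unique object id
};

// Construct a global id for object 'object_id' living on 'locality_id'.
// Because the locality id is part of the address, ids created on
// different localities can never collide.
inline global_id make_global_id(std::uint32_t locality_id,
    std::uint64_t object_id)
{
    return global_id{ std::uint64_t(locality_id) << 32, object_id };
}

// Recover the locality a given id refers to
inline std::uint32_t locality_of(global_id id)
{
    return std::uint32_t(id.msb >> 32);
}
```

Note that in a real system the runtime additionally maintains a mapping from global ids to current physical locations, which is what allows objects to migrate without invalidating their addresses.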

The dark gray squares depict client-side objects which refer to the global objects using their global addresses. The big advantages of such a system are

Uniform access to a global object, regardless of whether it is currently placed on the same locality as the caller or on a different one

Objects can be moved to a different locality without a need for updating any references to them

Uniform access is available on other systems as well (e.g. on those built on top of a Partitioned Global Address Space (PGAS)), but none of these systems supports moving arbitrary objects between localities.

Actions

In order to invoke a function remotely, any system has to provide some means of transferring information about the function (and its arguments) to the node where it should be executed. Unfortunately, in C++ there is no portable way of sending a ‘function’ over the network. We have to somehow establish a relationship between the function and some integer or string (or similar) uniquely identifying the function. In HPX we decided to associate each function which has to be invoked remotely with a unique type. We call those types ‘actions’. At its core, this can be easily done in C++ by specializing templates. Consider:

// The main action template type is left unimplemented
template <typename Func, Func F> struct action;

// Specialize the action template type for global functions
template <typename R, typename ...Args, R (*F)(Args...)>
struct action<R (*)(Args...), F>
{
    // Store actual parameters for function invocation
    tuple<Args...> arguments;

    // This will be executed on the source locality
    template <typename ...Ts>
    action(Ts&&... ts) : arguments(std::forward<Ts>(ts)...) {}

    // This will be executed on the destination locality
    R invoke()
    {
        // Invoke F using the tuple elements as parameters
        return invoke_fused(F, arguments);
    }

    // This serializes/de-serializes the action type
    template <typename Archive>
    void serialize(Archive& ar)
    {
        serialize_tuple(ar, arguments);
    }
};

// Expose the global function 'convert' as an action
int convert(string val)
{
    return std::stoi(val);
}
typedef action<decltype(&convert), &convert> convert_action;
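The helper invoke_fused used above is left unspecified. A minimal sketch of what it might look like (assuming C++17; this is an illustration, not HPX’s actual implementation) simply forwards to std::apply, which unpacks a tuple into a function call:

```cpp
#include <tuple>
#include <utility>

// One possible implementation of the invoke_fused helper: unpack the
// stored argument tuple and call f with the tuple's elements.
// std::apply (C++17) does exactly this.
template <typename F, typename Tuple>
decltype(auto) invoke_fused(F&& f, Tuple&& t)
{
    return std::apply(std::forward<F>(f), std::forward<Tuple>(t));
}
```

For example, invoke_fused(convert, make_tuple(string("42"))) would call convert("42").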

So, actions are special types we use to describe possibly remote operations. They are also used to serialize and de-serialize all the information necessary to transport the function (and its arguments) to another node over the network. For every global function and every member function which has to be invoked remotely, such a special action type must be defined (amongst other things). To simplify this, HPX provides the special macro HPX_ACTION which can be used to make a function remotely callable. Here is an example demonstrating this:

// This will define the action type 'convert_action' which
// represents the function 'convert' above. This will also
// generate all of the necessary boilerplate making the
// function 'convert' remotely callable.
HPX_ACTION(convert, convert_action);

The process of invoking a global function (or a member function of an object) with the help of the associated action is called ‘applying the action’. Actions can have arguments, which are supplied when the action is applied. At the minimum, one parameter is required to apply any action: the global address of the locality the associated function should be invoked on (for global functions), or the global address of the targeted object instance (for member functions).
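The two phases of applying an action can be illustrated with a stripped-down, purely local simulation (this is an illustration only, not HPX itself; the type convert_action here is a hand-written stand-in for what the macro generates): the arguments are captured at the call site, and the invocation happens later, in a real system on the destination locality.

```cpp
#include <cassert>
#include <string>
#include <tuple>
#include <utility>

// The function we want to make remotely callable
int convert(std::string val) { return std::stoi(val); }

// Hand-written stand-in for the generated action type
struct convert_action
{
    std::tuple<std::string> arguments;   // captured at the call site

    explicit convert_action(std::string s)
      : arguments(std::move(s)) {}

    // In a real system this runs on the destination locality, after
    // 'arguments' has been serialized and shipped over the wire.
    int invoke() { return convert(std::get<0>(arguments)); }
};

// 'Applying' the action: construct it with its parameters, then invoke.
// Here both steps happen in the same process; in HPX the target's
// global address decides where invoke() actually runs.
int apply_locally(convert_action act) { return act.invoke(); }
```

The key point is the separation: constructing the action only records the arguments, while invoke() performs the call, which is what makes it possible to move the invocation to another node.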

The Basic HPX Function Invocation API

The following table shows that HPX allows the user to apply actions with a syntax similar to what the C++ standard defines for ordinary, local functions. In fact, all action types have an overloaded function operator allowing the action to be invoked synchronously. Further, HPX implements overloads of hpx::async allowing an action to be invoked asynchronously, semantically similar to the way hpx::async already works for plain C++ functions (as described earlier). Additionally, HPX exposes hpx::apply for fire & forget operation (amongst other API functions), all of which refine and extend the standard C++ facilities.
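The three invocation styles can be mirrored with plain standard C++ facilities (a sketch for illustration; HPX generalizes each of these from local functions to actions and remote targets, and the function names here are made up):

```cpp
#include <future>
#include <thread>

int square(int x) { return x * x; }

// f(p...): synchronous invocation, blocks until the result is available
int call_sync(int x)
{
    return square(x);
}

// hpx::async(f, p...): asynchronous invocation, returns a future
// (here mimicked with std::async)
std::future<int> call_async(int x)
{
    return std::async(std::launch::async, square, x);
}

// hpx::apply(f, p...): fire & forget, no result is returned
// (here mimicked with a detached std::thread)
void call_apply(int x)
{
    std::thread(square, x).detach();
}
```

With actions, the same three shapes apply unchanged; the only addition is the global address of the target locality or object as the first argument.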



This table gives an overview of the basic execution API exposed by HPX. It shows the function invocation syntax as defined by the C++ language (dark gray), the additional invocation syntax as provided through C++ Standard Library features (medium gray), and the extensions added by HPX (light gray). In this table the following symbols are used:

f : function to invoke

p... : (optional) arguments

R : return type of f (and thus of the action)

action : the action type defined by HPX_ACTION(), encapsulating f

a : an instance of the type action

c : the client object representing the remote object the action is applied to

The future type returned from hpx::async is indistinguishable from any other future instance returned from other API functions. This enables the seamless integration of various asynchronous providers through a uniform interface.

Let’s now tie everything together in an example. For brevity I’ll show the infamous Fibonacci calculation:

int fibonacci(int n);
HPX_ACTION(fibonacci, fibonacci_action);   // defines fibonacci_action

int fibonacci(int n)
{
    if (n < 2) return n;

    // Spawn Fibonacci on another locality
    fibonacci_action fib;
    hpx::future<int> f = hpx::async(fib, find_other_locality(), n - 1);

    // In the meantime execute the other calculation locally
    int n2 = fibonacci(n - 2);

    // Wait for the future to return its value
    return f.get() + n2;
}

In this example, the function find_other_locality() determines the locality on which to run the Fibonacci function. It returns the global address identifying the target locality. We intentionally don’t show its implementation as this is outside the scope of this post.

For each (possibly remote) operation, HPX resolves the global address identifying the target object using its implementation of AGAS. If the target of the operation is local to the invocation, a new HPX thread will be created on the current locality. This is very similar to what a purely local operation would do. If the target of the operation is remote, HPX sends the action through the parcel-port to the destination, where the encapsulated function will be scheduled as a new HPX thread. From the user’s perspective, in both cases the semantics are 100% equivalent. The only difference is the locality where the HPX thread executing the required function is scheduled.
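The dispatch decision described above can be summarized schematically (an illustration of the logic only, not HPX internals; all names here are hypothetical):

```cpp
#include <cstdint>
#include <string>

// Schematic dispatch: once AGAS has resolved the target's global
// address to a locality, the runtime either schedules a local HPX
// thread or serializes the action into a parcel for the parcel-port.
std::string dispatch(std::uint32_t target_locality,
    std::uint32_t this_locality)
{
    if (target_locality == this_locality)
        return "schedule HPX thread locally";     // local case
    return "send parcel to target locality";      // remote case
}
```

Either way, the user-visible semantics are identical; only the locality on which the resulting HPX thread runs differs.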