The rapid increase in the processing power of computers in the past few decades has enabled the emergence of in silico experimentation across many domains, where research is conducted via computer simulations with models closely reflecting the real world.

In silico experimentation provides researchers with a number of significant advantages:

- higher precision and better quality of experimental data;
- better support for data-intensive research and access to vast sets of experimental data generated by scientific communities;
- more accurate simulations through more sophisticated models;
- faster individual experiments;
- higher work productivity.

In silico experimentation nowadays suffers from the growing complexity of setting up, maintaining, and modifying experimental simulation systems. Such systems often involve a range of heterogeneous components: modules for the preparation, extraction, and conversion of data; program codes that perform experiment-related computations; and scripts that join the other components together into a coherent system that exhibits the desired behaviour. Interacting with such a system involves a great number of purely computational concerns.
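Such glue code often looks like the following minimal sketch, in which the step names, placeholder logic, and data are invented purely for illustration; a real system would call external tools and pass files between them.

```python
# Hypothetical glue script illustrating how heterogeneous experiment
# components (data preparation, computation, format conversion) are
# often chained by hand. All step names and logic are placeholders.

def prepare_data(raw):
    """Clean the raw measurements (here: drop missing values)."""
    return [x for x in raw if x is not None]

def run_simulation(samples):
    """Stand-in for the experiment-specific computation."""
    return [x * x for x in samples]

def convert_results(results):
    """Convert simulation output to the format a later tool expects."""
    return {"n": len(results), "values": results}

if __name__ == "__main__":
    raw = [1.0, None, 2.0, 3.0]
    report = convert_results(run_simulation(prepare_data(raw)))
    print(report)
```

Every change to one component (a new data format, a renamed output file) ripples through this hand-written chain, which is precisely the maintenance burden described above.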

A regular researcher (for example, a biologist or chemist) may not have enough background knowledge to configure and tune the system to their needs. Insufficient computing expertise, which is natural for scientists since computing is not the focus of their work, is a strong barrier to the adoption and distribution of scientific applications, leaving these applications inaccessible to the majority of researchers.

Scientific workflows offer a solution to this problem. They provide an easy-to-use declarative way of specifying the tasks that have to be performed during a specific in silico experiment, while the technical details of workflow execution are delegated to a Workflow Management System.
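The idea of a declarative specification can be sketched as follows. This is not the API of any real workflow management system; the task names and the tiny executor are assumptions for illustration. The researcher only declares which tasks exist and what each depends on, and the "management system" derives a valid execution order itself.

```python
# Minimal sketch of a declarative workflow. The researcher declares
# tasks and their dependencies; a toy "workflow management system"
# computes the execution order. All names here are hypothetical.

from graphlib import TopologicalSorter

def fetch():    return "data"
def analyse():  return "stats"
def report():   return "paper"

# Declarative part: WHAT to run and what it depends on, not HOW.
workflow = {
    "fetch":   (fetch,   []),
    "analyse": (analyse, ["fetch"]),
    "report":  (report,  ["analyse"]),
}

def run(workflow):
    """Execute tasks in an order that respects the declared dependencies."""
    order = TopologicalSorter({name: deps for name, (_, deps) in workflow.items()})
    results = {}
    for name in order.static_order():
        results[name] = workflow[name][0]()
    return results
```

A real workflow management system adds scheduling, data transfer, provenance tracking, and fault tolerance on top of this dependency-resolution core, which is exactly what frees the researcher from the computational details.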