If I use Haskell as a library called from my C program, what is the performance impact of making calls into it? For instance, given a problem-world data set of, say, 20kB, suppose I want to run something like:

    // Go through my 1000 actors and have them make a decision based on the
    // HaskellCode() function, which is compiled Haskell I'm accessing through
    // the FFI. As an argument, send in the SAME 20kB of data to EACH of these
    // function calls, along with some actor-specific data. The 20kB of constant
    // data defines the environment, and the actor-specific data could be their
    // personality or state.
    for (i = 0; i < 1000; i++)
        actor[i].decision = HaskellCode(20kB of data here, actor[i].personality);

What's going to happen here: will it be possible for me to keep that 20kB as a single immutable block of data somewhere that the Haskell code reads by reference, or must a copy of that data be created on each call?
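The kind of interface I'm picturing is a sketch like the following (the names `decide`, `env`, and `personality` are mine, not a real API): if the exported Haskell function takes the environment as a raw `Ptr`, the C side can pass the same pointer on every call and nothing is copied; copying would only happen if the Haskell code itself marshals the buffer into a Haskell data structure.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module Decide where

import Foreign.Ptr (Ptr)
import Foreign.C.Types (CInt(..), CUChar)
import Foreign.Storable (peekElemOff)

-- The 20kB environment arrives as a raw pointer into C-owned memory.
-- Reading it with peekElemOff touches the C buffer in place; no copy
-- of the 20kB is made per call.
decide :: Ptr CUChar -> CInt -> CInt -> IO CInt
decide env envLen personality = do
  firstByte <- peekElemOff env 0               -- read a byte of the shared data
  return (fromIntegral firstByte + personality) -- placeholder decision logic

foreign export ccall decide :: Ptr CUChar -> CInt -> CInt -> IO CInt
```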

The concern is that this data could be larger, much larger - I also hope to write algorithms that act on much larger data sets, using the same pattern of immutable data being shared by several calls into the Haskell code.
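One zero-copy option I'm aware of on the Haskell side (a sketch; `wrapEnv` is a hypothetical name of mine) is `Data.ByteString.Unsafe.unsafePackCStringLen`, which wraps a C buffer as a `ByteString` in O(1) without copying, on the condition that the C side keeps the buffer alive and unmodified for as long as Haskell holds the `ByteString` (hence "unsafe"):

```haskell
module Env (wrapEnv) where

import qualified Data.ByteString as BS
import Data.ByteString.Unsafe (unsafePackCStringLen)
import Foreign.C.String (CString)
import Foreign.C.Types (CInt)

-- Wrap the C-owned buffer as a ByteString in O(1), without copying.
-- The C side must not free or mutate the buffer while this ByteString
-- is in use by Haskell code.
wrapEnv :: CString -> CInt -> IO BS.ByteString
wrapEnv buf len = unsafePackCStringLen (buf, fromIntegral len)
```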

Also, I'd like to parallelize this, like GCD's dispatch_apply() or C#'s Parallel.ForEach(..). My rationale for parallelizing outside of Haskell is that I know I will always be operating on many separate function calls (i.e. 1000 actors), so using fine-grained parallelism inside the Haskell function is no better than managing it at the C level. Is running FFI calls into Haskell from multiple threads 'thread safe', and how do I achieve this - do I need to initialize a Haskell instance every time I kick off a parallel run? (Seems slow if I must..) How do I achieve this with good performance?