Remaining Questions about HAppS

This is a list of questions I need answered about HAppS’ MACID monad before I can decide to use it for a large-scale application. It’s a long list, but I don’t mean to imply that the answers are bad; indeed, I’m hopeful they will turn out okay. I’m making the list in the hope that someone can point me in the right direction, as just reading the source code is slow going so far!

Is it feasible (even by extending the HAppS framework, if needed) to load balance a HAppS application across multiple servers? I already have performance problems with the old version of this application; I’m hopeful that many of them can be solved by replacing its Java object-relational mapper, which generates SQL that I never see, with Haskell code, where I can use smart data structures and algorithms to fix the problems. Still, I hope that more people will use this application in the future, and I’d hate to permanently give up the option of clustering or load balancing. Although it wouldn’t be immediately needed, it’s worthwhile to keep the choice open.

Is it feasible (even by extending the HAppS framework, if needed) to have HAppS store only part of its data at a time in memory? While servers are getting more and more powerful over time, applications are using more and more data as well. The data backup files for the application I’m currently porting to Haskell are 9 GB in size, after compression, and I’m hoping this will grow over time. While most of this data could potentially be moved into auxiliary files outside of the HAppS MACID monad, it’s still questionable whether I’d want to limit the data to the size of the server’s RAM. The web site says that everything has to fit in RAM, but I don’t know if that’s because of something fundamental about the way the system works, or just because that’s what is implemented today. The talk of moving to S3 and EC2 seems to indicate the latter (more promising) state of affairs.

Is it feasible (even by extending the HAppS framework, if needed) to tie HAppS’ MACID transactions to transactions in external information systems via a two-phase commit protocol? One of the key goals of the software I’m developing is integrating with other systems seamlessly, and that likely means allowing changes to HAppS-controlled data to participate in a distributed transaction.
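To make the question concrete, here is a minimal sketch of the coordinator side of two-phase commit. The `Participant` interface and `twoPhaseCommit` function are entirely my own invention for illustration, not anything HAppS provides; the idea would be that a MACID transaction is one participant and an external database is another.

```haskell
-- Hypothetical sketch: nothing here is an actual HAppS API.
data Participant = Participant
  { prepare :: IO Bool  -- phase 1: vote yes (ready to commit) or no
  , commit  :: IO ()    -- phase 2a: make the tentative changes permanent
  , abort   :: IO ()    -- phase 2b: roll the tentative changes back
  }

-- Commit everywhere only if every participant votes yes; otherwise abort all.
twoPhaseCommit :: [Participant] -> IO Bool
twoPhaseCommit ps = do
  votes <- mapM prepare ps
  if and votes
    then mapM_ commit ps >> return True
    else mapM_ abort  ps >> return False
```

The hard part, of course, is not this coordinator loop but getting MACID to expose a real prepare/commit/abort boundary for its transactions, which is exactly what I don’t yet know is possible.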

When I change the data structures for a HAppS application, it fails to load with the old state. Is there an easy technique for migrating data when one needs to change the data tracked by the application?
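Absent a built-in mechanism, the obvious fallback is a hand-written migration: read the old state, convert it, and write it back out under the new type. The `OldState` and `NewState` types below are invented for illustration; I don’t know whether HAppS offers anything more automatic than this.

```haskell
-- Hypothetical before/after state types for a schema change that adds a field.
data OldState = OldState { oldUsers :: [String] }
  deriving (Eq, Show)

data NewState = NewState { users :: [(String, Maybe Int)] }  -- added an age field
  deriving (Eq, Show)

-- Convert old data, filling the new field with a default.
migrate :: OldState -> NewState
migrate (OldState names) = NewState [ (n, Nothing) | n <- names ]
```

Writing one of these per schema change is tolerable; the open question is how to hook such a function into HAppS’ state-loading machinery so the old serialized state is migrated rather than rejected.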

What is the best way to manage (i.e., interactively query and update) live data from a HAppS application outside of the pre-planned application functions? For example, if I’m asked by my boss how many people with a first name of Tiffany have used the application after midnight since 17 days ago, I can currently throw together some SQL and find out. Now, can I throw together some Haskell and find out? How would I do that? The only thing that comes to mind is something like lambdabot, which uses hs-plugins to compile, dynamically load, and run some code. I can do this, but it’s a little odd, and it seems like there should be an easy answer to this.
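The attraction, at least, is clear: once the state is ordinary Haskell values, a one-off question becomes a throwaway function. Here is roughly what the Tiffany query might look like, assuming a hypothetical `User` record (the field names are invented, and I’ve left out the “since 17 days ago” date filter for brevity); the unsolved part is how to run such a function against the live state.

```haskell
-- Hypothetical record; a real application state would be richer than this.
data User = User
  { firstName    :: String
  , lastUsedHour :: Int    -- hour of day of last use, 0..23
  }

-- How many Tiffanys last used the application after midnight (say, hours 0..5)?
tiffanysAfterMidnight :: [User] -> Int
tiffanysAfterMidnight us =
  length [ u | u <- us, firstName u == "Tiffany", lastUsedHour u <= 5 ]
```

Compare that to ad-hoc SQL: the query itself is no harder to write, which is why it’s frustrating that there’s no obvious equivalent of a SQL prompt for poking at a running HAppS application.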