Previously I used Subversion for all my VCS requirements. It has proved very useful over the years, saving my ass a few times, and has generally been easy to deal with. But as I'm doing quite a bit of experimental hacking on gobject-introspection, keeping a huge patch file up-to-date was becoming a serious pain, and it made submitting anything to the maintainers practically impossible.

So, given that my understanding of Git was that it was designed to solve this specific problem (working on an experimental version of a public codebase), I thought I would investigate further. Coming from a CVS/SVN background, my first thought was that I should ask the maintainers to let me create a branch on the public git repository - that is normally what you would have to do if you were using a centralized CVS/SVN repository. But the more I read and understood, the clearer it became that rather than having a centralized branch, you just set up your own public copy of the git repository (in my case on my own server, as I work from a multitude of workstations and a laptop). Then you can basically hack away to your heart's content, making as much mess as you want, and when you are ready, tidy up your own commits into a series of 'proposal' commits and suggest the maintainers pull those changes from your server. Meanwhile you keep your changes intact, while still syncing with the updates they make. An absolute masterpiece in design there...
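In command form, that workflow looks roughly like the sketch below. It runs entirely in a throwaway directory so it is self-contained; in real life 'upstream.git' would be the maintainers' public repository and 'mine.git' the copy on your own server (both names, and the file, are invented for the example):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# stand-in for the maintainers' public repository
git init -q --bare upstream.git

# your local working clone
git clone -q upstream.git work
cd work
git config user.email you@example.com
git config user.name "You"
echo hello > file.txt
git add file.txt
git commit -q -m "initial commit"
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch"

# stand-in for the public copy on your own server
git init -q --bare ../mine.git
git remote add mine ../mine.git

# hack away on a topic branch, making as much mess as you like
git checkout -q -b experimental
echo hack >> file.txt
git commit -q -am "experimental hack"

# publish your branch, and keep syncing with the maintainers
git push -q mine experimental
git pull -q origin "$branch"
git log --oneline
```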

The only part I've found lacking is that generating this 'quality patch' is a bit clunky. Basically, after a while you find that you need to merge a number of commits from your own repository to generate the patch. I've got about halfway through writing a little GUI patch creator that works by reordering and merging commits together, as I could not find a tool that made it as simple as:

"Select Branch point, Join/break apart patches, re-order and then create a branch with a series of tidy patches with nice commit message."

(well that's for another post...)
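In the meantime, one non-interactive way to collapse a run of messy commits into a single tidy, mailable patch is a soft reset followed by format-patch (`git rebase -i` can also join and reorder commits, but it needs an editor). The repository, file, and messages below are all made up for the example:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q demo
cd demo
git config user.email you@example.com
git config user.name "You"

# the branch point
echo base > hack.txt
git add hack.txt
git commit -q -m "branch point"

# three messy work-in-progress commits
for n in 1 2 3; do
  echo "change $n" >> hack.txt
  git add hack.txt
  git commit -q -m "wip $n"
done

# squash the last three commits: move HEAD back, keep changes staged
git reset --soft HEAD~3
git commit -q -m "tidy: one clean proposal commit"

# export the tidy commit as a patch for the maintainers
git format-patch -1 HEAD
git log --oneline
```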

Anyway, the second way I use Subversion at present is basically WebDAV mounted on all my workstations: when I save a file, it is written via WebDAV to Subversion, which in turn, using some simple hooks, writes it to a live area where I can test stuff (e.g. websites).

This method of working is extremely efficient and pretty safe (I can undo deletes, etc.), and it also provides a handy way of estimating costs for billing (by looking at the commit times). The only serious downside is that it is highly dependent on my internet or wifi connection being stable and fast. The latter is quite often a serious problem, especially if the editor wants to open all the files in the project to determine auto-completion data. Essentially I've had to turn off some of these features for the sake of usability.
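The billing trick carries over to git just as well - commit timestamps are easy to pull out of the log. A tiny self-contained demonstration (repository and message are invented; `%ci` is the committer date):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"
echo work > site.html
git add site.html
git commit -q -m "client work"

# one timestamped line per commit - handy for rough time estimates
git log --format='%ci  %s'
```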

I had been wondering about an alternative for a while, initially thinking of a similar WebDAV-based arrangement. However, after toying with git for quite a while, I finally worked out a far better solution. So here goes.

On the server side - I have git running with the new smart HTTP CGI. (WebDAV-based git does NOT work for this: however much the documentation hints otherwise, you cannot commit to a WebDAV-mounted git repository, as it will never run any of the necessary hooks.) The smart HTTP CGI is also a bit flaky if you do not have git 1.7 on both ends.
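For reference, the CGI in question is presumably git-http-backend, which ships with git 1.7. A minimal Apache sketch looks something like this (the repository root, handler path, and password file are assumptions you would adjust for your own server):

```apache
# repositories live under /home/git; export them all over HTTP
SetEnv GIT_PROJECT_ROOT /home/git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/lib/git-core/git-http-backend/

# require authentication for pushes only
<LocationMatch "^/git/.*/git-receive-pack$">
    AuthType Basic
    AuthName "Git Access"
    AuthUserFile /etc/apache2/git.passwd
    Require valid-user
</LocationMatch>
```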

This has a very simple post-update hook that basically updates a live copy of the repository (in my case /home/gitlive/{repository}). Thanks to http://joemaller.com/2008/11/25/a-web-focused-git-workflow/ for the idea:

#!/bin/sh
# post-update hook: refresh the live working copy after every push.

# cd into the live checkout (not the bare repository).
cd /home/gitlive/myrepository || exit

# The hook runs with GIT_DIR pointing at the bare repository;
# unset it so git operates on the working copy we just entered.
unset GIT_DIR

git pull
Now the desktop end is where the clever stuff comes in. I create a directory ~/gitlive and check out a working copy in there:

git clone http://mysite/myproject

I then download this little script:

curl "http://git.akbkhome.com/?p=gitlive;a=blob_plain;f=gitlive.js;hb=HEAD" > gitlive.js

then run it from the desktop run dialog (F2) by entering:

seed ~/gitlive/gitlive.js

(Run it from a terminal to see what's going on if nothing works.)

What this does is:

a) uses inotify via g_file_monitor to watch everything in ~/gitlive

b) uses the Desktop Notification API to tell you what its watching

c) puts a little icon in the system tray (so you can quit it)

d) every time it sees any change in the file system, it commits that change (additions, moves, etc.)

e) pushes it to the live server (you probably need .netrc set up correctly if you use HTTP authentication)

f) uses the Desktop Notification API to let you know it's committed/sent the file.
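The commit-and-push core of steps (d) and (e) is tiny. A much-simplified shell rendition is below - gitlive.js itself does this from JavaScript via Seed, the demo file is invented, and the push is shown only as a comment since this sandbox repository has no remote:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "initial"

autocommit() {
    git add -A    # stages additions, edits, moves and deletes in one go
    # only commit if something actually changed
    if ! git diff --cached --quiet; then
        git commit -q -m "autocommit: $(date '+%Y-%m-%d %H:%M:%S')"
        # gitlive then does the equivalent of: git push origin master
    fi
}

echo "draft" > index.html
autocommit
git log --oneline
```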

And obviously, on the server side, this gets checked out into the live directory.

Now that all the files are local, I can finally have all the benefits of project auto-completion in my editors without the latency on saves and loads.

Try out the gitlive.js script for yourself and send me the URL of any patches you make... It's a good example of the power of GObject Introspection with Seed, illustrating how to spawn processes asynchronously and monitor the file system.