Software upgrades are a fact of modern life.

The typical lifecycle of centralised software looks like this:

Developer writes code locally

Team reviews code before merging it into a release candidate

Release is deployed to end users by system administrators

Rinse and repeat

For popular tech platforms this is happening many times per day. End users are ideally unaware of the process. The version of Facebook/Twitter that you logged in to yesterday was deleted before you even woke up this morning.

This works because the development team iterating on the code and the team administering the servers executing the code are broadly the same people. Co-ordinating deployments when everyone involved shares the same rough goals and values is relatively trivial. It’s common practice for sophisticated development teams to completely automate deployments, and much of the review process too.

In a decentralised system the developers, system administrators (e.g. miners) and end users (e.g. token holders) may have very different goals and values. Historically, in the context of a blockchain, these disagreements have often resulted in “hard forks”, weeping and gnashing of teeth.

Hard forks and soft forks are essentially the same thing in that when a cryptocurrency’s existing code is changed, an old version remains while a new version is created. However, with a soft fork, only one blockchain will remain valid as users adopt the update. Both forks create a split, but a hard fork creates two blockchains, and a soft fork is meant to result in one. — Investopedia

The need to make a clear distinction between hard and soft forks (and surrounding political drama) in a cryptocurrency comes from:

The need for constant global consensus on data

The need to maintain the scarcity and provenance of coins through a single canonical history

Holochain has no global data consensus, and design goals that allow for dApps where split/duplicate/diverging/converging histories are perfectly valid. A “migration” of Holochain data could split a single chain into several new chains, each running different code moving forward, but with a shared history before some point, potentially even remerging in the future.

For example, nothing is harmed by a temperature sensor simultaneously logging “duplicate” data through different dApp versions on parallel DHTs. Such a thing would cause mass hysteria for a blockchain economy. Imagine if BTC and BCH tried to keep their ledger entries “roughly” in sync through slowly diverging political ideals, validation logic and data structures…

What is planned for Holochain?

Ultimately, the usefulness/dangers of a given fork/upgrade process are contextual for each specific Holochain dApp.

What we plan to provide from Holochain core:

Standardised, system level data structure(s) allowing each user’s migration history to be tracked across many dApps/chains/DHTs

“Plug and play” governance models for developers to achieve common dApp lifecycle management strategies (e.g. BDFL, user opt in, temporary DHTs, etc.)

A feature set for data management (e.g. copying, referencing, etc.) that is symbiotic with existing bridging mechanisms

The migration process for each dApp will need to be specified and planned ahead of time by developers.

The only way to change the migration process for a live dApp is by migrating users to a new dApp.

Developers will need to plan carefully to avoid locking themselves into a “chicken or egg” situation, or a dead end with no way out of a bugged dApp.

Overall we have a more flexible system than consensus-based networks, but accept that it’s not quite as straightforward as the typical centralised approach either. We will continue to refine our tooling and recommendations over time based on community feedback.

Where are we at today?

tl;dr: There’s lots of work to be done, but no big red flags have surfaced yet.

We have laid the foundations for future systems, but the current processes, documentation and examples are still very young. For example, the core entry definitions only landed in the develop branch over the last week or so.

Migration entries currently consist of 4 properties:

The migration type (open or close)

The other dApp’s hash (source or destination)

The public key of the migrating user

An arbitrary string for any extra data that might be useful to record
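To make the shape concrete, here is a minimal sketch of a migrate entry as a plain object. The field names mirror the JSON seen in the curl examples later in this post, but treat the exact schema (and the `buildMigrateEntry` helper) as illustrative rather than the core definition:

```javascript
// Sketch only: field names follow the JSON in the curl examples below,
// but this helper is NOT part of Holochain core.
function buildMigrateEntry(type, dnaHash, key, data) {
  if (type !== "open" && type !== "close") {
    throw new Error("migration type must be 'open' or 'close'")
  }
  return {
    Type: type,       // the migration type: "open" or "close"
    DNAHash: dnaHash, // hash of the other dApp (source or destination)
    Key: key,         // public key of the migrating user
    Data: data        // arbitrary extra string
  }
}

var entry = buildMigrateEntry(
  "close",
  "QmZhnaC4sK7P9J52e7oZQkjTxKtBhrbPucimWRbmXUb6PN", // destination dApp hash
  "QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b", // migrating user's key
  "Hello Migrate!"
)
```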

Migrations are triggered from zome functions by calling migrate, for example:

function doMigration(params) {
  // params.type: "open" or "close"
  // params.DNAHash: hash of the other dApp (source or destination)
  // App.Key.Hash: the current user's public key
  // params.data: arbitrary extra string to record
  migrate(params.type, params.DNAHash, App.Key.Hash, params.data)
}

This migrates the current user to/from the DNAHash provided in params.

Note that migrate must be called separately in both the source and destination dApps to establish a bidirectional relationship.
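To make the bidirectional requirement concrete, here is a small self-contained sketch (plain objects standing in for DHT entries; the pairing check is illustrative, not a core API) of how a close entry in the source dApp and an open entry in the destination dApp reference each other:

```javascript
// Hashes taken from the example runs later in this post.
var sourceDNAHash = "QmNnQyKoJj6QdU7ZUczdTctEExXvN8DsHTXemaGHys7PLo" // dApp 1.1
var destDNAHash = "QmZhnaC4sK7P9J52e7oZQkjTxKtBhrbPucimWRbmXUb6PN"   // dApp 1.2
var userKey = "QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b"

// Committed in the source dApp: "I am leaving, headed for destDNAHash"
var closeEntry = { Type: "close", DNAHash: destDNAHash, Key: userKey, Data: "" }
// Committed in the destination dApp: "I arrived here from sourceDNAHash"
var openEntry = { Type: "open", DNAHash: sourceDNAHash, Key: userKey, Data: "" }

// Illustrative check that the two entries form a consistent pair.
function isBidirectionalPair(closeE, openE, sourceHash, destHash) {
  return closeE.Type === "close" &&
    openE.Type === "open" &&
    closeE.DNAHash === destHash &&  // close points forward to the destination
    openE.DNAHash === sourceHash && // open points back to the source
    closeE.Key === openE.Key        // same migrating user on both sides
}

var linked = isBidirectionalPair(closeEntry, openEntry, sourceDNAHash, destDNAHash)
```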

Under the hood migrate follows the standard commit-and-share workflow, so all migrate entries are broadcast publicly to the DHT and pass through all the standard validation hooks.

Holochain does some basic data type validation, but essentially that’s it!
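Anything beyond that is up to the dApp. As a hedged sketch (this is not the real validation hook signature, only the kinds of checks a zome's validation logic could layer on top):

```javascript
// Illustrative app-level checks for a migrate entry; NOT a core hook.
function validateMigrateEntry(entry) {
  if (entry.Type !== "open" && entry.Type !== "close") return false
  // In these examples, DNA hashes and agent keys are base58 multihashes,
  // which start with "Qm"
  if (typeof entry.DNAHash !== "string" || entry.DNAHash.indexOf("Qm") !== 0) return false
  if (typeof entry.Key !== "string" || entry.Key.indexOf("Qm") !== 0) return false
  return typeof entry.Data === "string" // free-form, but must be a string
}

var ok = validateMigrateEntry({
  Type: "open",
  DNAHash: "QmNnQyKoJj6QdU7ZUczdTctEExXvN8DsHTXemaGHys7PLo",
  Key: "QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b",
  Data: "Hello Migrate!"
})
```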

Code examples!

I’ve created a very basic example with two skeleton dApps and a single, shared migration zome (symlinked into place).

The example has no UI but if we run hcdev web in each of the dApps simultaneously then we’ll get the source/destination hashes output to the console, allowing us to cross-reference them.

The alpha version of holochain requires that each dApp is run in its own Holochain instance. This is why we need to run hcdev twice, across two separate ports below. This limitation should be lifted as Holochain core matures. For now though, there is some manual juggling required to run multiple dApp versions side by side and demonstrate a migration.

The output from dApp 1.1 on my local machine; note that the 1.1 dApp hash is visible:

$ hcdev web
Copying chain to: /Users/davidmeister/.holochaindev
Serving holochain with DNA hash:QmNnQyKoJj6QdU7ZUczdTctEExXvN8DsHTXemaGHys7PLo on port:4141

and the same thing from 1.2:

$ hcdev web 3141
Copying chain to: /Users/davidmeister/.holochaindev
Serving holochain with DNA hash:QmZhnaC4sK7P9J52e7oZQkjTxKtBhrbPucimWRbmXUb6PN on port:3141

Now I can POST the hash of 1.2 to the doMigration function in 1.1 (see the example repository for the doMigration zome code) for a close migrate entry:

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"type": "close","DNAHash":"QmZhnaC4sK7P9J52e7oZQkjTxKtBhrbPucimWRbmXUb6PN","Data":"Hello Migrate!"}' \
  http://localhost:4141/fn/migrateZome/doMigration
"QmarXpbndQLpkbYZJ7tLdbBDLvVHA3LTMSYMKpgEJb5rtQ"

And the inverse with an open entry on the 1.2 side, using the 1.1 hash to reference the source dApp:

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"type": "open","DNAHash":"QmNnQyKoJj6QdU7ZUczdTctEExXvN8DsHTXemaGHys7PLo","Data":"Hello Migrate!"}' \
  http://localhost:3141/fn/migrateZome/doMigration
"QmVTK1ZziwUPU8CuKEfjF5Qnw7u5rhpvysJrvyyUVWSY1M"

Now we can POST to get the 1.1 migrate closing entry back:

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"hash": "QmarXpbndQLpkbYZJ7tLdbBDLvVHA3LTMSYMKpgEJb5rtQ"}' \
  http://localhost:4141/fn/migrateZome/getMigrateEntry
{"Entry":{"DNAHash":"QmZhnaC4sK7P9J52e7oZQkjTxKtBhrbPucimWRbmXUb6PN","Data":"","Key":"QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b","Type":"close"},"EntryType":"%migrate","Sources":["QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b"]}

And the 1.2 migrate opening entry:

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"hash": "QmVTK1ZziwUPU8CuKEfjF5Qnw7u5rhpvysJrvyyUVWSY1M"}' \
  http://localhost:3141/fn/migrateZome/getMigrateEntry
{"Entry":{"DNAHash":"QmNnQyKoJj6QdU7ZUczdTctEExXvN8DsHTXemaGHys7PLo","Data":"","Key":"QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b","Type":"open"},"EntryType":"%migrate","Sources":["QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b"]}

From here, a real-world application would incorporate the existence of either of these migrate entries into its validation/business logic, and use the dApp reference hashes to co-ordinate cross-dApp data and processes.
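For instance (purely illustrative; the helper and the shape of the queried entries are assumptions modelled on the getMigrateEntry output above, not a core API), the old dApp might refuse new writes from users who have migrated away:

```javascript
// Sketch: gating business logic on migration state. `entries` stands in
// for the result of querying a user's migrate entries from the DHT.
function userHasClosedOut(entries) {
  return entries.some(function (e) {
    return e.EntryType === "%migrate" && e.Entry.Type === "close"
  })
}

// Shaped like the getMigrateEntry response shown above.
var queried = [{
  EntryType: "%migrate",
  Entry: {
    Type: "close",
    DNAHash: "QmZhnaC4sK7P9J52e7oZQkjTxKtBhrbPucimWRbmXUb6PN",
    Key: "QmaSscTWDqNLZ8MnXoSpCGTc4Qvs2p7Zpoo78HisFKrP8b",
    Data: ""
  }
}]
var blocked = userHasClosedOut(queried) // this user has closed out
```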

Happy hacking!