After our breakthrough with S3X, which lets you turn any S3 application into an IPFS application with a single-line change to your code base and no change to your application design, we decided to kick things up a notch with our nodes to ensure support and performance for big-data IPFS workloads.

TemporalX nodes ship with a built-in, consensus-free replication protocol that can be enabled via the configuration file. (Benchmarks below.)
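As a rough illustration, enabling replication could be a single stanza in the node's configuration file. The key names below are hypothetical, not TemporalX's actual schema; consult the documentation for the real format:

```yaml
# Hypothetical configuration fragment: enable the built-in replication
# protocol and tell the node where to listen for replication traffic.
replication:
  enabled: true
  listen_address: "0.0.0.0:9095"   # port serving the replication service
```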

Many replication systems use some form of consensus, whether it's CRDTs, Raft, Paxos, or a custom protocol. The downside to consensus-based replication systems is that bugs in the consensus can occur. With Raft, split-brain scenarios are possible, and if the number of servers in the consensus group drops below 50% while a node needs to restart, you'll be unable to start the consensus engine. That's why projects like IPFS Cluster moved to CRDTs (there are other reasons as well, but they aren't applicable to this discussion). However, CRDT-based consensus systems aren't problem-free either: they can be slow, their behavior is extremely sensitive to network conditions, and they have issues with compaction and space consumption.

The whole reason for TemporalX's existence is the need to support production-scale IPFS workloads with production-scale performance, which means we wanted to avoid both the problems that occur with Raft and the speed and size concerns of CRDT consensus.

The solution was quite simple: a replication protocol that uses gRPC as the transport layer, LibP2P PeerIDs to derive the certificates used for TLS encryption, and client-based authentication. This lets us reuse as much of our existing code as possible without having to deal with third-party libraries or maintain a new codebase.
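To make the shape of such a protocol concrete, here is a minimal sketch of what a gRPC replication service could look like in protocol buffers. The service, message, and field names are our own illustration, not TemporalX's published API:

```protobuf
syntax = "proto3";
package replication.sketch;

// Hypothetical replication service: a client broadcasts a signed
// replication update; the server acknowledges it.
service Replication {
  rpc PushUpdate(SignedUpdate) returns (Ack);
}

message SignedUpdate {
  bytes payload   = 1; // serialized replication file contents
  bytes signature = 2; // author's signature over the payload
  string author   = 3; // PeerID of the signing identity
}

message Ack {
  bool accepted = 1;
}
```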

Replication in TemporalX is controlled by a replication file, which lets us specify things like:

- The author (the identity responsible for signing the replication files)
- The servers to replicate to
- Their corresponding PeerIDs, used to derive the certificates for TLS
- The CIDs to replicate
- The minimum number of servers the data must replicate to for a replication to be healthy
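Put together, a replication file along those lines might look something like this. Field names and layout are illustrative assumptions, not the actual TemporalX format:

```yaml
# Hypothetical replication file: all names and values are illustrative.
author: "12D3KooWExampleAuthorPeerID"    # identity that signs this file
min_replicas: 2                          # servers needed for a healthy replication
servers:
  - address: "node1.example.com:9095"
    peer_id: "12D3KooWExampleServerOne"  # used to derive the TLS certificate
  - address: "node2.example.com:9095"
    peer_id: "12D3KooWExampleServerTwo"
cids:
  - "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"
  - "QmExampleSecondCidToReplicate"
```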

Signed replications provide extra security for replication updates. In IPFS Cluster, for example, updates to the “pinset” (the list of pins to replicate) are guarded by a single swarm key that is stored in plaintext and available to every node. Anyone who has this swarm key has access to the entire cluster and can alter the pinset at will.

Contrast this with TemporalX's replication, which requires signed updates for new replications, providing extra security: you can be sure that an incoming update was issued by an authorized entity.

Now, this is somewhat similar to IPFS Cluster, which requires a swarm key; however, there are two big differences.

1st: The swarm key must always be available in the config file, stored on disk in plaintext, which means that if someone gets access to a participating server, they can join your cluster.

2nd: The swarm key is a symmetric key, whereas replication uses a public key. Even if a replicating server is hacked, the private key is still safe.

TemporalX lets you broadcast your replication update to the participating nodes; after that, you no longer need access to the replication file until you want to broadcast another update.

TemporalX can perform replication updates under very strict security measures.

You can apply firewall-level restrictions so that only specific IP addresses can reach the replication port, and you don't need the replication signing key (the equivalent of a swarm key) stored in plaintext except at the moment you broadcast a replication update. If even a minute of plaintext exposure is undesirable, you can easily write your own modules using the open-source protocol buffers to pull the signing key from a secure key-management server such as Consul.

So let's talk performance!

It’s lightning fast, just like the rest of TemporalX. You can broadcast replication updates containing thousands of CIDs in seconds and have them parsed and loaded in minutes. To compare the performance of IPFS Cluster and TemporalX, we created a three-node environment and ran a test against 1,000 files. TemporalX took 128 seconds to reach convergence on the 1k pinset, while IPFS Cluster took 1,020 seconds, making TemporalX roughly 8x faster.

For the full detailed benchmarks: https://github.com/RTradeLtd/xreplb

What is TemporalX?

For those unfamiliar with TemporalX, it is an enterprise IPFS client written from scratch and designed to be stable, fast, and well-tested. TemporalX is used by many of our clients to handle big IPFS data demands, and replication is included with every TemporalX node at no additional cost. Check out the documentation here: https://docsx.temporal.cloud

✔️ gRPC API — Golang, Python, JavaScript, Java, Rust & more

✔️ Built-in Data Replication

✔️ Multi-IPFS networks

✔️ Multi-Datastore

✔️ Extremely Fast

✔️ Flexible Configuration

Join Temporal’s online community on Twitter or Telegram. We also have some great IPFS tutorials on our Medium.

Written by Kevin Vanstone. Follow him online @KevinVanstone for the latest from RTrade Technologies.