Tierion recently raised $25 million USD in an Initial Coin Offering (ICO) for their Tierion Network, advertised as a “universal platform for data verification”; a less bombastic description would be that the Tierion Network supports a timestamping service, Chainpoint.

Unfortunately, Tierion’s marketing efforts to investors contained a number of materially misleading and inaccurate claims about the Tierion Network and Chainpoint technology. In this article we’ll focus on their claims that the Tierion Network can generate proofs with very little latency, and their claims of high-accuracy timestamps.

We’ll show that:

1. Tierion’s claimed “immediate” response time is extremely misleading.
2. Tierion’s actual response latency is an order of magnitude higher than its competitors’.
3. Tierion’s claims of high-accuracy NTP timestamping are cryptographic nonsense.

Note that these issues are far from my only concerns about Tierion. The ICO itself deserves its own article, particularly from the point of view of an investor in the token.

Going forward, Tierion needs to frankly explain to their investors and the public how these inaccurate claims happened and what Tierion is going to do about them. After all, they’ve just accepted $25 million of other people’s money in a token sale of dubious legality - they need to tread carefully.

Update: Tierion has responded to this post with the following:

Disclaimer

I’m the founder of the OpenTimestamps project, which aims to create free as in beer and speech trust-minimized timestamping infrastructure. While OpenTimestamps itself is a non-profit venture, it’s still a direct competitor to the Tierion Network.

Latency Claims

The Tierion Token Sale Whitepaper includes this claim as one of the main features of Tierion/Chainpoint, stating that (emphasis mine):

Chainpoint responds immediately when a hash is submitted.

Similarly, Tierion advisor Zaki Manian mentioned a latency advantage multiple times in an article comparing Tierion to competing systems that was promoted as part of the ICO marketing effort. For example, this comparison of Factom to the Tierion Network:

TNT makes the right trade off. Leverage an existing chain to host tokens and optimize the network for speed, latency and reliability.

Their Vice President of Engineering, Glenn Rempe, has furthered these claims with specific performance figures, stating that:

99th percentile latency for those 1000 [Chainpoint timestamp] requests was 91ms, 50th percentile was only 41ms.

and

Our new systems are not only faster per request [than competing systems], but very wide in their ability to handle concurrent requests at scale.

What Rempe is referring to in the above claims is the time it takes a Tierion Network node to reply to a Tierion client with an RFC 4122 Version 1 UUID (also referred to as a hash_id) in response to the submission of a hash digest. In his words:

Each of those POST requests returned its own unique UUID with an embedded timestamp.

Is a UUID a timestamp? Well, let’s look at the UUID in the example Chainpoint proof from the token sale whitepaper:

```
$ uuid -d 8853b190-6061-11e7-9322-45354847e629
encode: STR:     8853b190-6061-11e7-9322-45354847e629
        SIV:     181209569491190109107197391533590963753
decode: variant: DCE 1.1, ISO/IEC 11578:1996
        version: 1 (time and node based)
        content: time:  2017-07-04 02:36:07.337000.0 UTC
                 clock: 4898 (usually random)
                 node:  45:35:48:47:e6:29 (global multicast)
```

A timestamp is a proof that some message m (the data) existed prior to some time t; timestamps prevent attackers from making the false claim that a message created after time t was in fact created prior to t. Critically, a timestamp allows Alice to convince Bob that her message existed prior to time t.

In no way, shape, or form is a UUID a timestamp.

First, the time field in a UUID simply isn’t authenticated in any way, and can be modified undetectably at will. Second, the entity that set that time field is an untrusted, pseudonymous Tierion Network node, with no good mechanism to guarantee that it will be accurate.
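To make this concrete, here’s a short sketch using Python’s standard uuid module showing that the embedded time field can be extracted - and rewritten to any value we like - with nothing for a verifier to detect (backdate is a hypothetical helper for illustration, not anything in Tierion’s code):

```python
import uuid
from datetime import datetime, timedelta, timezone

# 100 ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
UUID_EPOCH_OFFSET = 0x01B21DD213814000

def uuid1_time(u: uuid.UUID) -> datetime:
    """Extract the embedded timestamp from a version 1 UUID."""
    unix_100ns = u.time - UUID_EPOCH_OFFSET
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=unix_100ns // 10)

def backdate(u: uuid.UUID, delta: timedelta) -> uuid.UUID:
    """Return a UUID identical to u except with its time field shifted into the past."""
    t = u.time - int(delta.total_seconds() * 10_000_000)
    return uuid.UUID(fields=(
        t & 0xFFFFFFFF,            # time_low
        (t >> 32) & 0xFFFF,        # time_mid
        (t >> 48) & 0x0FFF,        # time_hi (version bits re-added below)
        u.clock_seq_hi_variant,
        u.clock_seq_low,
        u.node,
    ), version=1)

# The example UUID from the token sale whitepaper:
u = uuid.UUID('8853b190-6061-11e7-9322-45354847e629')
print(uuid1_time(u))                                  # 2017-07-04 02:36:07.337000+00:00
print(uuid1_time(backdate(u, timedelta(days=365))))   # a year "earlier", undetectably
```

The forged UUID is structurally indistinguishable from the original; nothing ties the time field to the submitted hash.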

Unfortunately, the Tierion team appear to misunderstand what a timestamp is. I first noticed this issue when looking at Tierion’s claims of NTP timestamping (more on that later). I reached out to Tierion multiple times in private prior to the ICO to try to get clarification on this issue, and got no response… until I opened a GitHub issue about it, asking:

The Tierion Token Sale Whitepaper’s Chainpoint section claims this happens via the hash_id field, but it’s unclear how this data is actually validated. How exactly does hash_id prevent invalid timestamps?

This tactic apparently forced their Vice President of Engineering, Glenn Rempe, to finally reply by virtue of the fact that GitHub issues are public (though Rempe immediately locked the issue after replying to ensure I couldn’t respond). And reply he did, stating among other things that (emphasis his):

The premise of your question is biased. The statement that “hash_id prevent invalid timestamps” doesn’t really make a whole lot of sense. Our hash_id fields contain a version 1 UUID. Those UUIDs are just a container for an embedded timestamp provided by the system clock. The hash_id cannot prevent invalid timestamps as it is itself a timestamp. What we have said in the white paper is the following: Chainpoint solves this dilemma by including multiple trusted timestamps and multiple trust anchors in each proof. This allows Chainpoint proofs to simultaneously possess accurate and trustless time attestations.

In fact, with regard to trust Rempe goes on to say that, emphasis his:

Clients who want to use the NTP derived timestamp can easily ensure that the time value included in the hash_id UUID is in fact accurate at the very moment of hash submission to us. All they have to do is extract the NTP server time from the UUID we give back to them in the POST response. Since our servers and Nodes are extremely fast in returning a UUID for a hash (single digit milliseconds are often observed) the client can trivially extract the time value out of the UUID at the moment they receive it and compare it to their own trusted local source of time for corroboration.

This completely misses the point of timestamping: I timestamp my data to convince you that it existed in the past; I’m not the one that cares whether or not the timestamp is valid, you are.

I suspect part of the problem here is that the Tierion team have never properly analysed - or even thought about - what actual attacks the Tierion Network is supposed to be preventing. Notably this is missing from all the Tierion documentation that I’ve seen.

Steel Manning: What if the UUID was just a handle?

While I could stop here, let’s instead steel man the Tierion design: what if Tierion fixed their protocol by making the hash_id solely a handle for retrieving a timestamp? After all, Tierion does claim that one of the functions of the hash_id is as a handle for retrieving a future proof; let’s improve the protocol by making it the only function.

Unfortunately, this still doesn’t constitute a useful timestamp. The purpose of a timestamp is to be able to verify that a message existed in the past; that verification necessarily happens in the future. With one hash_id per request - potentially millions per second - it’s obviously not possible to store every hash_id and associated digest indefinitely. Thus at some point in the future it’ll be deleted, making the timestamp impossible to verify.
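To put a number on that: storing a UUID plus a digest for every single request adds up quickly. A back-of-the-envelope estimate, where the request rate is a hypothetical figure of my own, not a Tierion number:

```python
# Illustrative estimate: per-request storage cost of keeping every
# hash_id -> digest record. The request rate is a hypothetical figure.
UUID_SIZE = 16                    # bytes: RFC 4122 UUID
DIGEST_SIZE = 32                  # bytes: sha256 digest
REQUESTS_PER_SECOND = 1_000_000   # hypothetical sustained load

bytes_per_day = (UUID_SIZE + DIGEST_SIZE) * REQUESTS_PER_SECOND * 86_400
print(f"{bytes_per_day / 1e12:.1f} TB/day")   # ~4.1 TB/day
```

Multiple terabytes per day is obviously not something pseudonymous nodes will retain forever.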

Again, Tierion’s latency claims are shown to be materially misleading.

Calendars

If you’ve ever used OpenTimestamps, you’ll know it can generate a timestamp proof in about a second. These proofs can be verified against the Bitcoin blockchain indefinitely far into the future. As everyone knows, Bitcoin blocks are generated on average once every 10 minutes - so how can OpenTimestamps achieve ~1s latency? Pending attestations and calendars.

When you submit a timestamp request to an OpenTimestamps calendar server, every other request made in the same 1 second interval is aggregated into a single merkle tree. The tip of the tree - the per-interval commitment - is saved indefinitely in a calendar, and each client gets back a proof built from their part of the merkle tree. Here’s an example:

```
File sha256 hash: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Timestamp:
append b835af0f1d66559786e7c73fb9d6c6bd
sha256
 -> append bfec614152361e015fd83918d18fffaf
    sha256
    prepend 033fd041a10693ce9fb68a35bc0da46b8138045b1e612880533cb168922d71d6
    sha256
    append 07f40e73a0a4fdfe8ac8025fa55d7f280410394f58e07f93fef9fca575364fe0
    sha256
    prepend 598d2e51
    append eb3cc0595c7f7a6a
    verify PendingAttestation('https://finney.calendar.eternitywall.com')
 -> append 6d193b7c9d26417e72f23ff1839f9c64
    sha256
    append 2b975fae293d77a9afdf13c3f60140e1263140c715d0bbf01d672fad92c57898
    sha256
    append a824c6d057e5380975cac1bf4420ae9b04814704900b3511b52af24c706082b4
    sha256
    prepend 598d2e51
    append 7571324451b55403
    verify PendingAttestation('https://alice.btc.calendar.opentimestamps.org')
```

Each PendingAttestation is a promise from a calendar server to do two things:

1. Eventually timestamp the per-interval commitment with Bitcoin.
2. Make that timestamp proof available publicly, indefinitely.

While a PendingAttestation isn’t technically a timestamp proof, the promises made by the calendar servers ensure that when we need to verify the timestamp in the future, we’ll be able to do so. Moreover, because the rate at which per-interval commitments are generated is fixed, the calendar system scales horizontally to an arbitrarily large number of timestamps per second by simply adding additional layers of aggregation.
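The aggregation step itself is simple. Here’s a minimal sketch - an illustration of the technique, not the actual OpenTimestamps code - that merkle-aggregates one interval’s digests and hands each submitter the path needed to recompute the per-interval commitment:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def aggregate(digests):
    """Merkle-aggregate one interval's digests.

    Returns (root, paths), where paths[i] is the list of (sibling_on_right,
    sibling_hash) steps needed to recompute root from digests[i].
    """
    paths = [[] for _ in digests]
    level = list(digests)
    members = [[i] for i in range(len(digests))]   # leaf indices under each node
    while len(level) > 1:
        if len(level) % 2:                          # odd count: duplicate last node
            level.append(level[-1])
            members.append([])
        next_level, next_members = [], []
        for j in range(0, len(level), 2):
            left, right = level[j], level[j + 1]
            for i in members[j]:
                paths[i].append((True, right))      # sibling hashes in on the right
            for i in members[j + 1]:
                paths[i].append((False, left))      # sibling hashes in on the left
            next_level.append(sha256(left + right))
            next_members.append(members[j] + members[j + 1])
        level, members = next_level, next_members
    return level[0], paths

def verify(digest, path, root):
    """Recompute the per-interval commitment from one digest and its path."""
    h = digest
    for sibling_on_right, sibling in path:
        h = sha256(h + sibling) if sibling_on_right else sha256(sibling + h)
    return h == root
```

The root is the only thing the calendar stores per interval; each client keeps just its own path, so calendar growth is independent of how many digests were submitted.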

As the Tierion Network is also built around this basic calendar concept - and copied the OpenTimestamps commitment operations proof design - let’s steel man the Tierion design again by discarding the hash_id concept, and using only calendar commitments.

Reliability

While OpenTimestamps uses a strategy of multiple redundant calendars to achieve fault tolerance, Tierion uses a single calendar maintained by a centralized set of “core” nodes. Fault tolerance and consensus is achieved via traditional leader election:

The Calendar is a blockchain that is kept in consensus between multiple Chainpoint Servers. This ensures that a single global calendar blockchain can be used to verify Chainpoint proofs. Calendar data is organized into blocks. These blocks are stored as records in a distributed cluster of CockroachDB databases. Writes to the calendar are enforced by a leader election using a cluster of Consul servers.

Since Tierion’s calendar is a consensus system of n nodes, it can only tolerate the failure of a minority of them; the OpenTimestamps approach only requires a single calendar to operate, so with n independent calendars it can tolerate the failure of up to n − 1 of them - a significant increase in reliability, while at the same time being a significant decrease in architectural and administrative complexity.

So far Tierion has not announced any Tierion Network use-cases that actually require consensus; timestamping does not. It’s quite possible this is an example of “ICO engineering”: technically unsound choices made to suit the needs of the ICO token rather than the actual use-case.

Calendar Size/Latency Tradeoff

The fundamental tradeoff in calendar design is between latency and calendar size. Since calendar systems are central points of failure, we want to ensure that the wider community can easily mirror calendar data, allowing timestamps to be verified independently of the calendar operators. But the minimum rate at which that data set grows depends on the aggregation interval, which in turn sets the latency for digest submission requests.

A key part of the sales pitch for the Tierion Network Token (TNT) is that TNT will be used to pay node operators, and that the cost of running nodes will be low. Emphasis mine:

Node Operators maintain a copy of a blockchain created by Core. This blockchain is called the Chainpoint Calendar. It contains data needed to verify any Chainpoint proof created by any node on the Tierion Network. The Calendar grows at a rate of approximately 4GB per year regardless of the number of proofs generated. This keeps the cost of node operations low.

Unfortunately, this 4GB claim simply can’t support Tierion’s latency claims. Even with the very unrealistic assumption that the only thing in the calendar is a single 32-byte digest per aggregation interval, 4 GB/year ÷ 32 bytes ≈ 125 million intervals/year - a minimum aggregation interval of roughly 250 ms. Conversely, the advertised single-digit-millisecond response times would imply calendar growth of roughly a terabyte per year.
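That bound is easy to check numerically, under the generous assumption that each interval adds only a single 32-byte sha256 commitment to the calendar:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # ~3.15e7 seconds
GROWTH_PER_YEAR = 4e9                   # Tierion's claimed 4 GB/year
COMMITMENT_SIZE = 32                    # bytes: one sha256 digest per interval

intervals_per_year = GROWTH_PER_YEAR / COMMITMENT_SIZE
min_interval = SECONDS_PER_YEAR / intervals_per_year
print(f"minimum aggregation interval: {min_interval * 1000:.0f} ms")   # ~250 ms

# Conversely, the advertised single-digit-millisecond responses would need:
growth_at_1ms = COMMITMENT_SIZE * SECONDS_PER_YEAR / 0.001
print(f"calendar growth at 1 ms intervals: {growth_at_1ms / 1e12:.1f} TB/year")
```

Either the 4GB figure or the latency figure has to give; they cannot both be true.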

In fact, what Tierion actually claimed in a Q&A was that:

Our calendar is anchored internally every ten seconds.

A ten-second calendar interval is a realistic figure once non-digest calendar data is included. But it implies a true request latency 10x higher than the ~1 second achieved by OpenTimestamps.

Remarkably, in spite of all the above this is what Manian had to say when comparing OpenTimestamps to Tierion:

[OpenTimestamps] latency seems unacceptably high. This is largely a result of not having a high availability server infrastructure that maintains a calendar chain.

It was actually Manian’s article that alerted me that something was very wrong with how Tierion was marketing their technology to investors. Having created OpenTimestamps, I knew that its latency was dominated by the fundamental calendar latency, and that no amount of “high availability server infrastructure” could change that.

Unfortunately, repeated attempts to reach out to Manian and others on the Tierion team for clarification/correction prior to the ICO were ignored, resulting in these materially incorrect claims being used in the ICO investor marketing effort (I was able to confirm that the Tierion team were aware of my attempts to reach out to them).

NTP Timestamps

While we’ve already shown that Tierion’s high-accuracy UUIDs are cryptographic nonsense, let’s again steel man the design by assuming that each calendar entry is a proper cryptographically signed trusted timestamp with disposable keys.

Tierion has made it quite clear that they’re making use of the NTP protocol as their source of time, and made claims of high accuracy. For example Rempe claims that:

We maintain accurate server time on using the NTP time sync service with careful selection of the upstream time service provider. Our servers are accurate within microseconds of stratum-2 NTP. Our NTP service provider maintains its own stratum-1 and stratum-0 service infrastructure. Our service provider has its own fleet of atomic clocks that provide the reference for their stratum-0.

and:

I measured the offset of one of our servers to demonstrate. It has a current offset from its peers of 9.845 µs (microseconds) (you read that correctly, µs not milliseconds).

In a Q&A they further clarified that the actual NTP service provider was Google’s public NTP service.

While these accuracy claims are apparently impressive, they’re missing the point for two main reasons:

1. Tierion’s calendar interval is ten seconds - six orders of magnitude less precise than the ten-microsecond offset Rempe brags about above.
2. NTP isn’t a secure protocol: it’s trivially vulnerable to MITM attacks, and NTP time data is repudiable.

What we actually need isn’t a highly accurate time source, but rather a sufficiently accurate time source that’s highly resistant to attack. Second, we need to recognise that the root of trust of these trusted timestamps isn’t the NTP provider, but rather the entity signing them - Tierion, via their Core network nodes. Getting time synchronization from an external source - Google in this case - simply adds another possible attacker to the system.

What Tierion actually should be doing here is ditching NTP entirely, and using free-running high accuracy clocks on physically secure infrastructure. For example, the Maxim DS3231 real-time-clock is readily available as a Raspberry Pi accessory, and is accurate to a few seconds per month without any calibration over a large temperature range. Even the apparently primitive setup of a RPi + DS3231 would be sufficiently accurate for Tierion, and the fact that this setup could be made self-contained and immune to external attack makes it a much more secure option than NTP.
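A quick sanity check of that drift figure, using the DS3231’s datasheet accuracy of ±2 ppm over its rated temperature range (the ppm value is the only assumption here):

```python
DRIFT_PPM = 2.0                  # DS3231 worst-case frequency error, +/- 2 ppm
SECONDS_PER_MONTH = 30 * 86_400

drift_per_month = DRIFT_PPM * 1e-6 * SECONDS_PER_MONTH
print(f"worst-case drift: {drift_per_month:.1f} s/month")   # ~5.2 s/month
```

That matches the “few seconds per month” figure above - around a minute per year, easily corrected during routine physical maintenance.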

NIST Random Beacon

The general concept of a random beacon is a source of unpredictable nonces, with each nonce associated with a time. By committing to these nonces in a message we can prove that the message was created after a certain point in time.
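Concretely, “committing to” a beacon nonce just means including it in the hashed message. A minimal sketch, where the nonce value is a placeholder rather than real beacon output:

```python
import hashlib

def commit(beacon_nonce: bytes, message: bytes) -> bytes:
    """Bind message to a beacon output: this digest cannot have been
    computed before beacon_nonce was published."""
    return hashlib.sha256(beacon_nonce + message).digest()

# Placeholder for a beacon output published at a known time:
nonce = bytes.fromhex('aa' * 32)
digest = commit(nonce, b'my document')
# Anyone who trusts the beacon knows digest was created after the nonce's
# publication time - note this bounds the digest's age, not the document's.
```

The proof is only as strong as the verifier’s trust in the beacon operator and in the nonce’s unpredictability.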

Tierion claims to use the NIST Randomness Beacon, an experimental service from the National Institute of Standards and Technology that creates unpredictable (time, nonce) tuples, signed by NIST, on a regular basis.

However, it’s unclear to me that Tierion has actually implemented this feature, for one simple reason: I emailed NIST and was told on July 24th that their HSM was broken. While nonces are still being generated, the signatures on them are currently invalid and thus can’t be verified as authentic. Contrary to Tierion’s claims, NIST also confirmed that the nonces are generated with an off-the-shelf HSM, not specialized hardware using quantum effects. Between these two issues, I doubt that Tierion’s claimed collaboration with NIST amounted to anything more than a few emails.

The NIST Random Beacon isn’t a production-grade service yet; Tierion should not be presenting it as one and would do well to build in more generic random beacon support.

That said, it’s important to note that random beacons are tricky to actually use. While Tierion is correct to state that a random beacon can prove a timestamp was created between two points in time, that proof extends only to the timestamp itself, not to the data being timestamped. While this can be useful in special cases like detecting backdated Bitcoin blocks, it’s non-trivial for end-users to make use of this kind of proof successfully in their systems.