A quick primer on a few Lachesis concepts

Event Blocks & 2n/3 Consensus

Event blocks are created when three nodes share transactions. Each time nodes exchange transactions, they create a new shared event block. This block contains all transactions shared by the participating nodes and is signed by the creator and the recipients.

Each node has a genesis event block. When a node receives data, it attempts to create an event block with other participants.

The node itself keeps track of specific network data: in particular, the height vector (the current index of event blocks created by each node, which is also used in the cost function, explained later) and the in-degree vector (a structure that tracks the number of edges from other event blocks to the top event block of each node).
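As a rough sketch of this bookkeeping (names and structure are illustrative, not taken from any Lachesis implementation), the per-node state might look like:

```python
# Illustrative sketch of the per-node bookkeeping described above.
# Names are hypothetical, not from the Lachesis reference implementation.

class NodeState:
    def __init__(self, peers):
        # height vector: current index of event blocks created by each node
        self.height = {p: 0 for p in peers}
        # in-degree vector: edges from other event blocks to the
        # top event block of each node
        self.in_degree = {p: 0 for p in peers}

    def on_new_event_block(self, creator, other_parents):
        # the creator's height grows by one
        self.height[creator] += 1
        # the new block references each parent's top event block,
        # adding one incoming edge to it
        for p in other_parents:
            self.in_degree[p] += 1
        # the creator's new top block has no incoming edges yet
        self.in_degree[creator] = 0
```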

Using each node's in-degree vector and height vector, we can compute a cost function result for each node.

In this particular instance, we can select two random nodes. We will explain the cost function in more detail soon.
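The exact cost function appears in the paper excerpt later in this article; as a hedged sketch, assuming a cost of the form in-degree divided by height (lowest cost preferred, ties broken randomly), parent selection might look like:

```python
import random

def select_parents(height, in_degree, self_id, k=2):
    """Pick k peers with the lowest cost.
    Assumes cost = in_degree / height; see the paper excerpt for the
    actual definition used by Lachesis."""
    costs = {}
    for node in height:
        if node == self_id:
            continue
        # avoid division by zero for nodes still at their genesis block
        h = max(height[node], 1)
        costs[node] = in_degree[node] / h
    lowest = min(costs.values())
    candidates = [n for n, c in costs.items() if c == lowest]
    # if several nodes tie on the lowest cost, pick randomly among them
    if len(candidates) >= k:
        return random.sample(candidates, k)
    # otherwise top up with the next-cheapest nodes
    rest = sorted((n for n in costs if n not in candidates),
                  key=lambda n: costs[n])
    return candidates + rest[: k - len(candidates)]
```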

Our random selection returns the green node and the yellow node as the other parents of our event block, and we initiate a synchronization event.

This procedure continues asynchronously across all nodes as they receive data. As the event block DAG is populated, we want to identify event blocks that are shared by 2n/3 of the network. These event blocks can be considered finalized, and their ordering can be output.

To achieve this, we compare which event blocks are known by 2n/3 of the network. We can do this by testing whether a previous round's witnesses are known to the current event block. For the explanation above, the witnesses are the genesis event blocks. (We will explain witness selection in more detail soon.)

This illustration shows three rounds — G, G+j and G+k, each denoted by the first witness event block created.

In round G+j, blue can ‘see’ round G blue, green, yellow, red and purple.

In round G+j, green can ‘see’ round G blue, green, yellow, red, and purple.

In round G+j, yellow can ‘see’ round G blue, green, yellow, red, and purple.

In round G+j, red can ‘see’ round G blue, green, yellow, red, and purple.

In round G+j, purple can ‘see’ round G blue, green, yellow, red, and purple.


All the events in G can be finalized (i.e., shared by 2n/3), but not until round G+k is reached.

With these concepts discussed, let’s look at some physical limitations to vertical scalability.

Physical limitations of vertical scalability

Processing (CPU)

Memory

Network

Processing (CPU)

The most computationally expensive part of processing is transaction validation. A single core can validate several thousand transactions per second (~2.2K). With cryptographic optimizations for GPUs, this is estimated to be 5x faster than CPU processing; a fully optimized GPU setup could validate 100K+ TPS on a single node.

Going to extremes, consider a 72-core instance. This could give us ~158.4K TPS (~2.2K x 72 cores) on a single instance; with GPU optimizations, this could reach ~792K TPS.

While theoretically possible, this does not equate to a real-world 792K TPS: it would hold only if a single high-throughput node received that many transactions, which in practice is not feasible.
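The arithmetic behind these estimates, using the figures from the text:

```python
tps_per_core = 2_200     # ~2.2K validated transactions/sec per core
cores = 72
gpu_speedup = 5          # GPU estimated at ~5x CPU processing

cpu_tps = tps_per_core * cores       # 158,400 TPS on one 72-core instance
gpu_tps = cpu_tps * gpu_speedup      # 792,000 TPS with GPU optimizations
```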

Memory

At 260 bytes per transaction and 100 KB per event block, an event block holds ~393 transactions. Holding 300K transactions in memory would require ~74 MiB.

Network Bandwidth

With a transaction size of 260 bytes, processing 300K transactions per second would require a ~600 Mbps network. Theoretically possible, but not practical.
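The memory and bandwidth figures above follow directly from the stated sizes (taking 100 KB as 102,400 bytes):

```python
TX_BYTES = 260
BLOCK_BYTES = 100 * 1024                 # 100 KB event block

txs_per_block = BLOCK_BYTES // TX_BYTES          # ~393 transactions
mem_mib = 300_000 * TX_BYTES / 2**20             # ~74 MiB for 300K txs
bandwidth_mbps = 300_000 * TX_BYTES * 8 / 1e6    # ~624 Mbps for 300K tx/s
```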

Practical Implications

The size of each event block processed by the Lachesis Consensus Algorithm (LCA) is intended to be expanded up to 100KB.

Each transaction is 260 bytes, so a single event block can include up to 393 transactions. Each node takes 0.1 seconds to create an event block, so each node creates 7~10 event blocks per second. Assuming the number of transactions requested is unbounded (equally distributed across all participating nodes) and that 100 nodes are participating, each node would asynchronously and simultaneously create 7~10 event blocks per second, or up to 3,930 transactions per second (393 transactions per event block x 10 event blocks per node).

When the number of event blocks reaches 2n/3, the Lachesis protocol adds and verifies another Main Chain Block (a block with fixed finality that cannot be changed). With 100 nodes, around 700~1,000 event blocks are created per second, so each stage processes approximately 700~1,000 event blocks, or 275,100~393,000 transactions per second.
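These network-wide figures are the per-node numbers scaled up by the node count:

```python
txs_per_block = 393
blocks_per_node = 10        # upper end of the 7~10 event blocks/sec range
nodes = 100

node_tps = txs_per_block * blocks_per_node     # 3,930 TPS per node
network_blocks = nodes * blocks_per_node       # up to 1,000 blocks/sec
network_tps = network_blocks * txs_per_block   # up to 393,000 TPS
low_tps = nodes * 7 * txs_per_block            # 275,100 TPS at 7 blocks/sec
```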

There are quite a few technical disclaimers to these numbers.

Simulation

Node participation effects on propagation

2 nodes

500 transactions every 500 milliseconds for each node

Event blocks created every 100 milliseconds

Time: 18183

Transactions: 8375

Pending: 58

TPS: 2763.57

Data: 7.778 MiB

Time To Finality: 657

2 nodes

500 transactions every 100 milliseconds for each node

Event blocks created every 100 milliseconds

Time: 14940

Transactions: 10334

Pending: 320

TPS: 4150.20

Data: 9.752 MiB

Time To Finality: 621

4 nodes

500 transactions every 100 milliseconds for each node

Event blocks created every 100 milliseconds

Time: 22021

Transactions: 7257

Pending: 157

TPS: 1977.29

Data: 6.811 MiB

Time To Finality: 1982

We note a decrease in overall TPS due to an increase in TTF (Time To Finality): as node participation increases, asynchronous processing capacity increases, but time to finality also grows.

8 nodes

500 transactions every 100 milliseconds for each node

Event blocks created every 100 milliseconds

Time: 19726

Transactions: 3563

Pending: 92

TPS: 1083.75

Data: 3.393 MiB

Time To Finality: 2860

The greater the participation of nodes, the higher the asynchronous transaction processing, at the trade-off of increased time to finality.

Graph 1: Time To Finality (ms) vs Node Participation

Graph 2: Transactions Per Second (To Finality) vs Node Participation

Graph 3: Asynchronous Transactions Per Second
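The trend the graphs describe can be tabulated from the 500-transactions-per-100-ms runs above (numbers taken directly from those runs):

```python
# (nodes, TPS to finality, time to finality in ms), from the runs above
runs = [(2, 4150.2, 621), (4, 1977.3, 1982), (8, 1083.7, 2860)]

for nodes, tps, ttf in runs:
    print(f"{nodes:>2} nodes: {tps:7.1f} TPS, TTF {ttf:>4} ms")
```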

We will discuss two core elements of the Lachesis Algorithm: the cost function, which achieves faster witness creation, and the flag table, which allows for faster witness selection.

Cost Function

A key difference in the Lachesis protocol is the in-degree/height vector selection of nodes, known as the Cost Function.

To skip the explanation and see the results, search for [CF001].

The following is an excerpt from our technical paper: