Velocity

Changing When People Arrive

There’s an important detail I glossed over in the last example: the fact that 5 people show up each week. This timing is critical.

Specifically, what matters is how many people want to use StampCoins at the same time. The level of simultaneous demand is important because all the coins get distributed across that demand. If the same total people show up over the course of the year, but in differently sized groups, we get different results.

We can see that by adjusting the timing in our example. Let’s say 20 people show up once every 4 weeks instead of 5 people each week. We still get 260 total deliveries, but it’s 20 people × 13 periods instead of 5 people × 52 periods. Same deliveries, different timing.

Now the StampCoins get distributed differently. When 20 people show up, they want those same 1,000 StampCoins, so each person gets 1000 / 20 = 50 coins. Since they’re still only willing to pay $4 for those 50 coins, the value per StampCoin becomes 4 / 50 = $0.08. All the StampCoins are still used, just in smaller amounts and with a value that adjusts upward to meet the new demand.

Going back to MV = PQ, we have a P of 50 StampCoins per delivery, and the same Q of 260 deliveries. So PQ is now 50 * 260 = 13,000 StampCoin uses. With M of 1000, we get V = 13000 / 1000 = 13 times each StampCoin is used.

In terms of the equation of exchange, M (1000) × V (13) = P (50) × Q (260)
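The arithmetic for both scenarios can be checked with a short script. This is a minimal sketch using only the numbers from the example (1,000 coins, $4 per delivery); the `scenario` helper name is mine.

```python
M = 1000                    # total StampCoins in circulation
DOLLARS_PER_DELIVERY = 4    # what each person is willing to pay for a delivery

def scenario(group_size, num_groups):
    """Return (coins per delivery, dollar value per coin, velocity)."""
    Q = group_size * num_groups        # total deliveries in the year
    price = M / group_size             # P: all the coins split across one group
    value_per_coin = DOLLARS_PER_DELIVERY / price
    V = price * Q / M                  # rearranged from MV = PQ
    return price, value_per_coin, V

print(scenario(5, 52))    # 5 people weekly: (200.0, 0.02, 52.0)
print(scenario(20, 13))   # 20 people every 4 weeks: (50.0, 0.08, 13.0)
```

Same 260 deliveries either way; only the group size changes, and the price, per-coin value, and velocity all shift with it.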

Velocity as a Function of Timing

What we’re seeing is that the timing of demand determines the velocity. Smaller, more frequent groups of people speed up the velocity, whereas larger, less frequent groups slow it down. Intuitively, this is because when there are more groups of people that don’t overlap with each other, we can reuse the StampCoins more often.

There are two ways to quantify this. The first is simultaneous demand — how many people on average use the coins at the same time. The second is how long the average interval is between each usage. When there are many consecutive groups of people, both the simultaneous demand and the average time between each usage are small. But if everyone shows up at the same time once a year, the simultaneous demand is literally everyone and the time between each usage is a full year. To quantify these two relationships:

Velocity = (total activities) / (simultaneous demand) = (total time) / (average time between usages)

Let’s apply these relationships to our two examples. For both, the total activities (Q) is 260 deliveries and the total time period is 52 weeks. For the first example with 5 people each week, we get a velocity of 260 / 5 = 52 from the left side above and the same value of 52 / 1 = 52 from the right side. And for the second example with 20 people every 4 weeks, we get 260 / 20 = 13 from the left side and the same value of 52 / 4 = 13 from the right.
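Two one-line helpers make the equivalence explicit (the helper names are mine; the numbers are the ones from the examples):

```python
TOTAL_WEEKS = 52
Q = 260   # total deliveries over the year

def velocity_from_demand(simultaneous_demand):
    """Left side: total activities / average simultaneous demand."""
    return Q / simultaneous_demand

def velocity_from_interval(weeks_between_usages):
    """Right side: total time / average time between usages."""
    return TOTAL_WEEKS / weeks_between_usages

print(velocity_from_demand(5), velocity_from_interval(1))    # 52.0 52.0
print(velocity_from_demand(20), velocity_from_interval(4))   # 13.0 13.0
```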

Applying Queuing Theory

These relationships introduce the idea of the time between activities. If we can estimate that interval, then we can estimate the velocity. Fortunately, there’s a whole field of study from operations management that can help: queuing theory.

Queuing theory is the mathematical study of waiting in lines. Using the rate at which people arrive (λ) and the rate at which a server can provide the service (µ), it offers formulas for how much time someone spends in the system on average. To apply this to velocity, we can treat our StampCoin example as a queue. People show up to use StampCoins for deliveries, and then servers provide a delivery service.

An important input in queuing theory is the number of servers. With limited servers, it’s possible to show up when they’re all busy, forcing you to wait in line. As you add servers, the average waiting time decreases. And with infinite servers (an “M/M/∞ queue”), queuing theory proves that the waiting time converges to zero. In equation terms, the average time someone spends in an M/M/∞ queue is 1/μ, which equals the service time. For example, when the server rate is 30 per hour (2 minutes per service), the average time someone spends in the system with infinite servers is 1/μ = 1/30 = 0.033 hours, or 2 minutes, which is exactly the service time.
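The claim that average time in the system falls toward the service time 1/μ as servers are added can be checked with a small first-come-first-served simulation. This sketch is entirely my own illustration (the 25/hour arrival rate is an assumed example, not from the text):

```python
import heapq
import random

def mean_time_in_system(lam, mu, servers, n=50_000, seed=1):
    """Average time (wait + service) in an FCFS queue with `servers` servers.

    Pass servers=None for the infinite-server (M/M/inf) case, where no one waits.
    """
    rng = random.Random(seed)
    free_at = [0.0] * servers if servers else None  # heap of server-free times
    if free_at:
        heapq.heapify(free_at)
    t = total = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)                   # next Poisson arrival
        service = rng.expovariate(mu)               # exponential service time
        if free_at is None:                         # infinite servers: start now
            start = t
        else:                                       # wait for the earliest free server
            start = max(t, heapq.heappop(free_at))
            heapq.heappush(free_at, start + service)
        total += (start + service) - t
    return total / n

mu = 30  # 30 services/hour, i.e. a 2-minute service time
for c in (1, 2, 4, None):
    print(c, round(mean_time_in_system(25, mu, c), 4))
```

With one server the average time is dominated by waiting; by four servers it is already close to 1/30 of an hour, and with infinite servers it equals the average service time.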

With cryptoassets, we basically have infinite servers since you don’t really need to wait in line to start a crypto transaction. So we’ll use the M/M/∞ queue equations going forward. For velocity, we want the average time between activities, and we now know the average time in the system is the service time, 1/μ. This means that if all the activities were to happen back-to-back — that is, there were zero gaps between them — then the average time between activities would equal the service time. That would give us a velocity of 1 / (1/μ) = μ.

However, we can’t assume we’ll have zero gaps. No matter how high the demand is on average, there’s always some chance that for a moment (or longer), there could be no demand and the system could be idle. That would then increase the average time between activities and decrease the velocity.

We therefore need to refine our velocity equation to account for those gaps when the system is idle. To do that, we can subtract the percent idle time from the numerator, total time. Combining that with our μ value above, we get a refined formula:

Velocity = μ × (1 − percent idle time)

Idle Time

The last step is estimating the percent idle time. Another way of phrasing this is the probability that zero people are in the system. Conveniently, queuing theory derives this value, π(n), defined as the probability that there are n people in the system once it reaches a steady state. The mathematical proofs get complex (here’s a good resource), but we can skip to the final M/M/∞ equation:

π(n) = e^-(λ/μ) × (λ/μ)^n / n!

The percent idle time is the probability that there are zero people in the system, π(0). Plugging in 0 for n, we get:

π(0) = e^-(λ/μ)

This is pretty abstract, so let’s apply some real numbers. Let’s say the arrival and server rates are equal, e.g. people show up 10 times per hour and the servers can also serve them 10 times per hour. In this case, the percent idle time is e^-(10/10) = 0.368, or 36.8%. This means that when the arrival and server rates are equal, with infinite servers, the whole system is expected to be idle 36.8% of the time. If the arrival rate is double the server rate, then the idle time is e^-2 = 0.135, or 13.5%. And when it’s triple the server rate, we get e^-3 = 0.050, or 5.0%. As the arrival rate increases relative to the server rate, the percent idle time converges toward zero.
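These idle-time figures can be reproduced directly from e^-(λ/μ), and the equal-rates case can also be sanity-checked by simulating an infinite-server system and measuring the fraction of time no one is present. The simulation approach (merging busy intervals) is my own illustration:

```python
import math
import random

def idle_fraction_analytic(lam, mu):
    """P(zero people in an M/M/inf system): e^-(lam/mu)."""
    return math.exp(-lam / mu)

def idle_fraction_simulated(lam, mu, horizon=20_000.0, seed=7):
    """Simulate M/M/inf arrivals and measure the empty fraction of time."""
    rng = random.Random(seed)
    busy_intervals = []
    t = 0.0
    while True:
        t += rng.expovariate(lam)                # Poisson arrivals
        if t >= horizon:
            break
        busy_intervals.append((t, min(t + rng.expovariate(mu), horizon)))
    if not busy_intervals:
        return 1.0
    # Merge overlapping busy intervals, then total up the covered time.
    busy = 0.0
    seg_start, seg_end = busy_intervals[0]
    for start, end in busy_intervals[1:]:
        if start <= seg_end:
            seg_end = max(seg_end, end)
        else:
            busy += seg_end - seg_start
            seg_start, seg_end = start, end
    busy += seg_end - seg_start
    return 1.0 - busy / horizon

for ratio in (1, 2, 3):
    print(ratio, round(idle_fraction_analytic(10 * ratio, 10), 3))
print(round(idle_fraction_simulated(10, 10), 3))
```

The analytic values match the 36.8%, 13.5%, and 5.0% above, and the simulated empty fraction lands near e^-1 as well.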

With this last piece we can update our velocity equation. In this context, the arrival rate, λ, is the number of activities per year, Q. Putting it all together, we get this equation:

Velocity = μ × (1 − e^-(Q/μ))
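One way to build intuition for this formula, Velocity = μ × (1 − e^-(Q/μ)), is to see how it behaves as service speeds up. This sketch is mine; it just evaluates the formula at a few assumed values of μ with Q fixed at 260:

```python
import math

def velocity(Q, mu):
    """Expected velocity: mu potential uses, scaled down by the non-idle share."""
    return mu * (1 - math.exp(-Q / mu))

# Velocity always stays below the ceiling Q, and approaches it as mu grows
# (faster service means less chance that activities overlap).
for mu in (300, 3_000, 30_000):
    print(mu, round(velocity(260, mu), 1))
```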

Back to StampCoins

To see our new equation in practice, let’s go back to our StampCoin example. We still have 260 total deliveries, but this time let’s say the timing is random over the course of the year (which the above equation assumes). Let’s also assume we now have infinite delivery drivers — meaning no one ever waits in line — and that on average each delivery takes 3 hours.

For comparison purposes, let’s first consider the maximum possible velocity, when the idle time is at a minimum. This happens if none of the deliveries ever overlap, allowing us to reuse all of the StampCoins every time and thereby get a velocity of 260. Out of 24 * 365 = 8,760 total hours per year, we would use the coins for 260 * 3 = 780 hours’ worth of deliveries. That results in 8,760 − 780 = 7,980 hours of idle time, or a percent idle time of 7,980 / 8,760 = 91.10%.

That’s for the maximum possible velocity, but we know that deliveries might overlap and thereby decrease the velocity. This is where our new equation comes in. At 3 hours per delivery, our μ is 8760 / 3 = 2,920 potential deliveries one server can do per year. With our Q of 260, we get a percent idle time of e^-(260/2920) = 0.9148, or 91.48%. It’s slightly higher than the minimum possible 91.10%, accounting for the probability of deliveries overlapping. Accordingly, the lower velocity will be 2920 * (1 − e^-(260/2920)) = 248.8.

To see how the numbers change, consider a new example where deliveries take a full day instead of 3 hours. It’s still technically possible to get that 260 velocity, but it’s much less likely because the deliveries would need to line up almost perfectly. There’s a higher probability that deliveries overlap, so the velocity should be lower. Using our equation, our new μ is 365 potential deliveries per year and our new velocity is 365 * (1 − e^-(260/365)) = 186.0. As expected, the velocity is much lower, reflecting the higher probability of deliveries overlapping.
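Both worked examples can be reproduced in a few lines using μ × (1 − e^-(Q/μ)); only the helper name and structure are mine:

```python
import math

HOURS_PER_YEAR = 24 * 365   # 8,760

def delivery_velocity(total_deliveries, hours_per_delivery):
    """mu * (1 - e^(-Q/mu)), with mu = potential deliveries per year."""
    mu = HOURS_PER_YEAR / hours_per_delivery
    return mu * (1 - math.exp(-total_deliveries / mu))

print(round(delivery_velocity(260, 3), 1))    # 3-hour deliveries: 248.8
print(round(delivery_velocity(260, 24), 1))   # full-day deliveries: 186.0
```

Longer service times shrink μ, raise the chance of overlap, and pull velocity further below its 260 ceiling.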