The most extraordinary recent development in the blockchain space has been the rise of a genuine crypto asset class.

Since October 2016, the combined crypto market cap has increased from $12bn to over $200bn, with a brief but explosive peak of almost $800bn. Concurrently, the number of cryptoassets has ballooned from dozens to thousands. As a whole, cryptoassets have low correlations with other asset classes such as equities, fixed income, commodities, and fiat currencies. Within the asset class, cryptocurrencies often exhibit (with notable and sometimes extended exceptions) low correlations with one another. This is reflective of the plethora of use cases, structures, layers, geographies, user bases, and architectures of these various projects.

A key question raised by this development is why these assets have value in the first place. In many ways, cryptoassets break the mold of traditional investing. There are no revenue streams, future profits, defined payoff functions, or other common fundamental factors. While we can draw insights from traditional valuation models, ultimately crypto demands new valuation machinery. Models are increasingly sophisticated in capturing fundamental drivers, but they remain largely unproven and immature, with room for continued improvement.

This is the first of a series of articles exploring various aspects of cryptoasset valuation, particularly how they pertain to a cryptocurrency like Logos. This first article establishes an analytical framework for thinking about valuation, and subsequent articles will delve into the various levers and considerations from the perspective of a long-term investor.¹

We will focus primarily on the Equation of Exchange (EoE) model, which is the most compelling model for cryptoasset value thus far. It is, however, incomplete, and we will suggest several improvements to increase its accuracy and utility. We also touch on alternative valuation models and why they are less useful for a fundamental investor.

Background and Previous Work

Applying the EoE model to cryptoasset valuation was most notably popularized by Chris Burniske (although it has been around the crypto space in one form or another for many years), and I highly encourage you to read his great explanation of how the model works at a high level. We will assume familiarity with Chris’s model as a baseline for our own analysis.

This model views a cryptoasset as the currency for the network’s local economy. The network economy typically is comprised of two primary components:

1. Actual transfers of value (i.e. payments) between users, denominated in the cryptoasset.

2. Network fees to process transactions or services.

Cryptocurrencies (Bitcoin, Logos) capture both components, although to be useful, the network fees should be de minimis relative to transaction size. Utility tokens (Ethereum) generally are used only for the second purpose (or at least that’s how they are designed). The key distinction between the two is which transactions are denominated in the currency being modeled. The chief modeling challenge is accurately parameterizing this network economy; the rest flows through the Equation of Exchange.

As articulated by Chris, the model is the cryptoasset analog of the traditional discounted cash flow (DCF) model. At its core is the concept of net present value (NPV), which adjusts the value of a dollar in the future to the value of a dollar today. This is done by dividing the future value by some compounded expected rate of return. This discount rate is subjective, but good choices often are at the risk-free rate (roughly equivalent to the interest on a US government bond), a weighted cost of capital, or some target return called the hurdle rate.²
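The NPV mechanics can be sketched in a couple of lines; the $100 payoff and 10% rate below are arbitrary illustrations, not recommendations:

```python
def present_value(future_value, rate, years):
    """Discount a future value to today at a compounded annual rate: FV / (1 + r)^T."""
    return future_value / (1 + rate) ** years

# A hypothetical $100 received 5 years out, discounted at a 10% hurdle rate
pv = present_value(100.0, 0.10, 5)  # ~62.09: a dollar later is worth less today
```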

Both the DCF and the EoE models incorporate and are driven by future growth (whether positive or negative), which is the primary art³ involved in this type of modeling. This involves careful consideration of the total addressable market, competitive advantages, strategic positioning, and other factors.

For simplicity, we can assume that the cryptoasset does not directly generate passive income (i.e. excluding payments for genuine services, like validation). Some newer tokens have traditional security or profit-sharing features, but these cash flows are easily incorporated using traditional models, like the DCF.

Chris’s EoE model backs out a base utility value of the cryptoasset token implied by the size of the network economy and an assumed velocity at various points in time. This utility value reflects the minimum functional value of the cryptoasset to facilitate all projected transactions denominated in that asset. Utility value should not be confused with utility token — both cryptocurrencies and utility tokens have utility value.

It then discounts the utility value at the end of a pre-defined investment horizon to arrive at a present value reflecting anticipated future growth in that network economy over that period. But, as he hints, this is not the full picture. What if investors have different horizons? And what about investors looking to buy the cryptoasset at that future date, who themselves anticipate further growth?

Brett Winton introduced what he calls “nth order investors” to reflect this continuous growth accounting. Even this is incomplete — why stop at just 6 orders, for example? If the numbers were somewhat different, the discounting series might not converge. Is there a way to generalize this concept to account for any scenario? We’ll circle back to this point later.

Almost all valuation proposals come with prominent disclaimers cautioning that many parameters, such as velocity and adoption rates, are very speculative. John Pfeffer’s “An (Institutional) Investor’s Take on Cryptoassets” offers commentary on the model’s various levers and his view of what reasonable values for them would be. Although I don’t agree with all his conclusions, I’d highly recommend reading it — it provides some great insights and raises some key questions.

For the time being, we’ll leave those considerations for future articles and use this article to flesh out the valuation framework.

The Equation of Exchange Model

For the purpose of establishing consistent terminology for the remainder of the article, we will restate the EoE model here.

The Equation of Exchange says, for a particular economy:

M * V = P * Q

where

M = USD value of the currency underlying the transactions

V = velocity of the currency

P = average price per good in USD

Q = number of goods bought/sold using the currency

Some notes: As real-world investors thinking of investing US dollars or some other fiat currency, it makes sense to denominate value in USD without loss of generality. Similarly, we divide time into years, although this analysis can be applied over any period of time.

P*Q is best thought of as a single variable representing the total USD value of the services paid for with the cryptoasset (utility component) plus the transactions denominated in the cryptoasset (payment component). For a cryptocurrency asset, without loss of generality, we can think of these two components as subsumed into a single Total Economy Value (TEV) variable. This reflects how current payments markets are counted in the real world, where the cost of transacting is included in the overall value of the transaction.⁴ Unlike in an equity DCF, where revenue is recurring, Total Economy Value is a cross-sectional snapshot of the network.

Rearranging, we get

M = (P * Q) * (1 / V)

This highlights the two primary drivers of cryptoasset value: the annual value of the economy divided by the number of times the money changes hands per year.

At first pass, this statement may seem like a tautology, but it allows us to decompose an abstract concept (the value of a cryptoasset) into components that can be estimated fundamentally.

P*Q is particularly accessible, and Chris walks through how you can apply market size, adoption, and growth analysis to arrive at a reasonable range. We will elaborate on how a more traditional framework can be applied to this analysis.

Velocity is still a somewhat opaque concept, and there has been much debate on a reasonable value for V without many satisfying answers. We can help things a bit by noting that (1/V) is equivalent to average holding period of the cryptoasset, but this too is difficult to wrap your head around. We have come up with a framework that substantially clarifies V in the context of payments cryptocurrencies, but will defer discussion to a subsequent article. For now, we will assume that Chris’s estimate of V = 20 is reasonable.
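As a toy rendering of the rearranged equation (the $100bn economy value is purely hypothetical):

```python
def implied_network_value(tev_usd, velocity):
    """Rearranged Equation of Exchange: M = (P * Q) / V."""
    return tev_usd / velocity

# A hypothetical $100bn annual network economy at Chris's V = 20
m = implied_network_value(100e9, 20)  # $5bn monetary base
```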

Improvements on the basic model

For this article, we have put together a simplified, illustrative version of our internal valuation models. You can download the Excel model at my Github.

Our goal is to extend the basic model to one that could actually be used (with tweaking of the actual numbers) to make an investment decision.⁵ Luckily, we have a secret weapon: resident valuation modeling guru Joe Alie, who cut his Excel teeth in banking and private equity.

First, some meta-parameters: This model is tailored to a currency-type cryptoasset whose primary target markets are payments. Let’s call it PayLedger. Modeling a utility token follows a slightly different but analogous path. Furthermore, we are assuming a proof-of-stake consensus model (we’ll expound on the merits of proof-of-stake vs proof-of-work in another article). We can combine coins held for staking and coins hodl’d into a single category, as any rational economic actor would want to earn a return on their capital.

Here are the main improvements to the basic model:

Drilling down into TEV with multiple target markets

Within a single use case, there are typically many different sub-markets with varying dynamics. A payments network like PayLedger can conceivably address payments in gaming, IoT, point-of-sale, remittances, and other sectors, each with their own unique structure, end users, growth, and incumbents.

We have extended the model to drill down into each target market with as much specificity as possible. Projected market sizes and addressability factors are specified on the “TAM Assumptions” tab, which flow through to rows 57 to 64. Adoption curves, representing the percentage of each TAM the network captures, are parameterized for each market in F14:K24 using a logistic function, which flows through to rows 47 to 54. Rows 67 to 75 combine these components to arrive at an economy value per market as well as the TEV (that is, P*Q).

Separate logistic adoption curves for each target market within the payments use case

Accurately estimating TEV will be the primary skill of the cryptoasset modeler and likely will require the most thought. Expanding the TAM analysis to have more granularity allows us to have higher confidence around estimated TEV at the various points in time.
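A minimal sketch of this per-market structure, assuming a standard logistic curve; every TAM, share, and steepness number below is a placeholder, not an output of our model:

```python
import math

def logistic_adoption(year, max_share, midpoint, steepness):
    """S-curve share of a market's TAM captured by the network in a given year."""
    return max_share / (1 + math.exp(-steepness * (year - midpoint)))

def total_economy_value(year, markets):
    """TEV (P*Q) for a year: sum of adoption * addressable market size."""
    return sum(
        logistic_adoption(year, m["max_share"], m["midpoint"], m["steepness"]) * m["tam"]
        for m in markets
    )

# Hypothetical target markets; parameters chosen only to show the mechanics
markets = [
    {"tam": 500e9,  "max_share": 0.10, "midpoint": 6, "steepness": 0.9},  # remittances
    {"tam": 2000e9, "max_share": 0.02, "midpoint": 8, "steepness": 0.7},  # point-of-sale
]
tev = total_economy_value(10, markets)
```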

Rigorously defining various concepts of value

In order to address the problem of differing investor horizons, start times, and future growth hinted at earlier, we first need to formalize the concept of value with respect to time more rigorously.

Contemporaneous utility value (CUV) is the value for M that pops out of the equation of exchange for a particular period of time, without accounting for the time value of money. Burniske calls this “current utility value”, but that name is somewhat misleading, as it represents some future time rather than now. CUV is the minimum value of the network needed to facilitate its economy (transactions denominated in the network token). This makes it a fundamental value realization point, much like an M&A transaction — by definition, the value of the network cannot fall below CUV at a particular time. It is time agnostic: it does not account for future growth, nor does it incorporate discounting. CUV is calculated on row 80.

Present utility value (PUV) is CUV with discounting applied. That is,

PUV(T) = CUV(T) / (1 + r)^T

where r is the discounting rate (in this case, the general hurdle rate) and the CUV of interest is T years in the future. Like CUV, PUV reflects just a single period of time and does not account for future growth. So PUV is the value floor for an investor today with a specific investment horizon. PUV is calculated on row 83.
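The discounting step can be expressed directly; the $10bn CUV and 30% hurdle rate below are hypothetical:

```python
def present_utility_value(cuv, hurdle_rate, years_out):
    """PUV(T) = CUV(T) / (1 + r)^T: one future year's CUV discounted to today."""
    return cuv / (1 + hurdle_rate) ** years_out

# Hypothetical: a $10bn CUV seven years out at a 30% hurdle rate
puv = present_utility_value(10e9, 0.30, 7)
```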

Fair token value (FTV) is the value that incorporates all possible investment start dates and horizons. As a result, it represents the maximum price a rational economic buyer would pay for a cryptoasset — that is, the market-clearing price in an efficient market. At this price, a (risk-neutral) investor is indifferent as to whether to buy or sell a particular asset. FTV is therefore the value that we really care about for valuation.

Unfortunately, calculating FTV is non-obvious. How can we account for potentially infinite investor horizons and start dates?

Correctly calculating Fair Token Value

Brett Winton tried to address the problem of finding FTV in the article linked above by introducing the nth order investor concept. Under his model, all investors have a fixed investment horizon of 5 years.

A first order investor looks at CUV 5 years from their investment start date (today or in the future) and discounts it to get a contemporaneous PUV.

A second order investor looks at what a first order investor would pay 5 years from their investment start date and discounts that.

Generally, an investor of order n looks at what an investor of order n-1 would pay 5 years from their investment start date and discounts that.

Mathematically stated, Winton’s nth order value nV, as a function of order O and start time T with constant horizon H and discount rate r, can be defined recursively as:

(1) nV(O, T) = nV(O-1, T+H) / (1+r)^H;  O ≥ 1

(2) nV(O, T) = CUV(T);  O = 0

Plugging (2) into (1) yields

(3) nV(O, T) = CUV(T + O*H) / (1+r)^(O*H)

Brett then defines nth order FTV at any point in time as the maximum (over order O) contemporaneous PUV value:

nFTV(T) = max_O nV(O,T)

The nth order value model captures some of the key concepts of FTV, namely that it should reflect the maximum that any investor is willing to pay, given that there will be investors in the future.
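The recursion and the max over orders can be sketched as follows; the toy CUV curve is a stand-in for the model’s projections, not the model itself:

```python
def n_value(cuv, order, start, horizon, r):
    """Winton's nth order value: discount what the (order-1) investor
    would pay `horizon` years after this investor's start date."""
    if order == 0:
        return cuv(start)  # base case: contemporaneous utility value
    return n_value(cuv, order - 1, start + horizon, horizon, r) / (1 + r) ** horizon

def n_ftv(cuv, start, horizon, r, max_order):
    """nth order FTV: the max over investor orders at a point in time."""
    return max(n_value(cuv, o, start, horizon, r) for o in range(max_order + 1))

# Sanity check against the closed form nV(O, T) = CUV(T + O*H) / (1+r)^(O*H),
# using a toy CUV curve (purely illustrative)
cuv = lambda t: 1e9 * 1.4 ** t
assert abs(n_value(cuv, 2, 1, 5, 0.3) - cuv(11) / 1.3 ** 10) < 1.0
```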

However, it still is not fully correct. It assumes a constant horizon for every investor of 5 years. What if someone is investing for 2 years (a value hedge fund) or 10 years (an early stage VC)? We might be missing a potential maximum. Also, how many orders of investors do we need to project? Brett stops at 6, but how do we know that is sufficient?

Let’s set aside the second question for now and focus on the first question. We want to incorporate all possible investor horizons. Since our model uses yearly time periods, the greatest granularity we can get on investor horizon is annual. This assumption is fine, as we can always expand the granularity of the model arbitrarily. It follows that the most visibility we can get in the nth order investor model is by setting the horizon to one unit of time, in this case a year. This is done in C114:Q129 in the model. Each nth order investor now looks at the (n-1)th order investor’s perceived value 1 year from now (H = 1), with the 0th order investor using the CUV.

We have 12 years of projections, so we can model out to an investor of order 12. In Brett’s formulation, the second order investor at time 1 discounted the first order investor 5 years from now, which in turn discounted 5 years from then. That is, for H=5:

V(2,1) = V(1,1+H)/(1+r)^H = CUV(1+2*H)/(1+r)^(2H) = CUV(11)/(1+r)^(10)

Using our new H=1 formulation, this value is calculated in our model by the 10th order investor in the first year (cell E126). This is evident by plugging O=10 and H=1 into formula (3). You can also confirm this by tracking the formula diagonally up the valuation triangle from E126 to F125 to G124 and so on. We can also see that Brett’s first order investor at time 1 is equivalent to our 5th order investor (cell E121).

So our new formulation captures all the nth order investors calculated in Brett’s model, but we have 5 times the granularity! In other words, we achieve maximum visibility into various investor horizons by setting H=1.

We can now calculate a Fair Token Value at each point in time by finding the max nth order investor value for H=1 (row 129). Plotting this FTV and CUV by year shows that they converge. This makes sense — CUV is the minimum cryptoasset value to support the contemporaneous network economy, while FTV is the maximum value that incorporates future TEV growth. As TEV growth slows (mostly from reaching max adoption), the two should converge.

CUV and FTV start with a substantial difference but converge as growth slows

But the nth order analysis shows something else. Notice that the column representing the various nth order investor values in the first year (E116:E128) match the PUVs in row 83! Why is that?

Mathematically, equation (3) says that for H=1 and T=0 (the first modeled year),

nV(O) = CUV(O) / (1+r)^(O) = PUV(O)

This means that we don’t need to go through the nth order investor calculations at all!

Instead of calculating the maximum nth order value for a particular period of time, we can simply calculate the max PUV over the modeled periods of time. Note that this equivalence is only true for H=1.

This insight is reflected in the model’s final FTV calculation in cell E85.
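The simplification can be sketched directly; the CUV series below is illustrative only:

```python
def fair_token_value(cuv_by_year, hurdle_rate):
    """With H = 1, FTV today reduces to max over T of CUV(T) / (1 + r)^T,
    i.e. the largest PUV across the modeled years."""
    return max(c / (1 + hurdle_rate) ** t for t, c in enumerate(cuv_by_year))

# Toy CUV projection in $bn (index = years from now); illustrative only
cuvs = [1, 3, 8, 18, 30, 40, 46, 49, 50, 50]
ftv = fair_token_value(cuvs, 0.30)  # peaks at year 5, before discounting wins out
```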

But there still is an issue — what if the year after the last one we have modeled has the maximum PUV, which would change our FTV? How do we know we have projected enough years for our model?

Incorporating terminal growth analysis

We turn once again to the DCF to understand how to deal with this termination problem. A DCF does not model an infinite number of years, even though there are potentially infinite future cash flows! Instead, it calculates a terminal value for all future cash flows.⁶ This assumes a constant growth rate of profits that continues in perpetuity. Mathematically, this involves summing a geometric series that grows by the ratio of the terminal growth multiple to the discount multiple: (1 + growth rate)/(1 + discount rate).
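The terminal value calculation is the standard Gordon-growth perpetuity sum; a minimal sketch with illustrative numbers:

```python
def terminal_value(final_flow, growth, discount):
    """Gordon-growth perpetuity: sum of flows growing at `growth` forever,
    discounted at `discount`; the geometric series converges only if growth < discount."""
    if growth >= discount:
        raise ValueError("series diverges unless growth < discount rate")
    return final_flow * (1 + growth) / (discount - growth)

tv = terminal_value(100.0, 0.02, 0.10)  # ~1275: all post-horizon flows in one number
```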

Can we apply a similar analysis to the EoE model? As shown in the previous section, what we care about is potential growth in PUV, so we introduce terminal growth analysis as a tool to answer this question.

PUV is a function of discounted CUV. CUV per token in turn depends on the TEV and the liquid token supply (LTS), the number of tokens available for transactions. TEV grows with adoption rate and target market growth, while LTS grows with inflation and changes in tokens held outside the economy (e.g. staking). Put mathematically,

(4) PUV(T) = CUV(T) / (1 + r)^T

(5) CUV(T) = TEV(T) / LTS(T)

In order to assess if there could be a maximum PUV in future years that we are missing, we need to check if the rate of change in PUV is positive. That is, we have not modeled enough years if

(6) PUV(T+1) / PUV(T) > 1

Expanding PUV into its components, we get

PUV(T+1) / PUV(T) = [ CUV(T+1) / CUV(T) ] * [ 1 / (1 + r) ]

To satisfy (6), we need

CUV(T+1)/CUV(T) > (1 + r)

Plugging in (5):

TEV(T+1)/TEV(T) * LTS(T)/LTS(T+1) > (1 + r)

Equivalently: TEV growth / LTS growth > (1 + r)

This makes intuitive sense: in order for PUV to increase, the growth in TEV per liquid token must outstrip the discount rate.

So how can we incorporate this into our model?

Analogous to the terminal value analysis in a DCF, we will assume a constant rate of terminal TEV growth (that is, TEV(T+1)/TEV(T) is constant for T > 12). A good estimate is the growth in TEV over the last two years modeled.

Slightly more complex is LTS growth. Without loss of generality, assume all vesting and non-economic hodl’ing has finished. This is a reasonable assumption after 12 years from the network launch. The only non-liquid use of tokens is then for staking. Stakers are locking up their tokens to earn a return on investment.

Let’s further assume that the marginal cost of verification (the service that stakers are providing) is covered by direct transaction fees, and token inflation is entirely profit. These assumptions can easily be tweaked to reflect different circumstances. By definition, the total amount staked will be determined by the ROI demanded by the marginal staker.

For our model, we assume staker ROI of 10% and token inflation of 1.5%, which corresponds to 15% of tokens staked (cells C27:D29). This is the long-term stake percentage, and the LTS is simply (total tokens) * (1 - stake percentage). Thus, LTS growth is

LTS growth = 1 + inflation * (1 - stake percentage)

Given the terminal TEV growth and LTS growth, we can easily verify our stopping condition by comparing their ratio, as shown in cells C89:E92. Provided that equation (6) does not hold, we can stop!
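The staking and stopping logic can be sketched as follows, using the article’s 10% ROI and 1.5% inflation assumptions; the function names are ours, not the spreadsheet’s:

```python
def stake_percentage(inflation, staker_roi):
    """If inflation is pure staking profit, staking grows until the marginal
    staker just earns their required ROI: stake % = inflation / ROI."""
    return inflation / staker_roi

def lts_growth(inflation, stake_pct):
    """Liquid token supply growth from the article's formula."""
    return 1 + inflation * (1 - stake_pct)

def can_stop_modeling(tev_growth, inflation, staker_roi, hurdle_rate):
    """Stopping check: once CUV growth (TEV growth over LTS growth) no longer
    beats the discount multiple, PUV only falls and FTV is unchanged."""
    pct = stake_percentage(inflation, staker_roi)
    return tev_growth / lts_growth(inflation, pct) <= 1 + hurdle_rate

# The article's assumptions: 1.5% inflation / 10% ROI -> 15% staked
pct = stake_percentage(0.015, 0.10)
```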

This comparison over time is shown in the graph below. Once the blue line (growth in CUV) permanently crosses the green line (our discount rate), we do not need to model additional years. The terminal growth analysis is how we confirm that it is, in fact, permanent.

Hurdle rate vs growth in CUV per token. If CUV growth is less than the hurdle rate, PUV will decrease and FTV will not change.

The key to this terminal growth analysis is that the network has reached some equilibrium in terms of adoption, change in liquid tokens, and target market growth. It assumes that the various growth rates are constant for all subsequent periods. The terminal growth calculation is a stopping condition; as long as the result is less than the discount rate, this assumption doesn’t have any impact on our FTV estimate. Nevertheless, there should be some reasonable justification for such an equilibrium to exist before applying the terminal growth calculation.

Sensitivity analysis

A key valuation tool is sensitivity analysis. This involves running the model under various values of a particular input parameter (or multiple parameters) to see how the valuation changes. It helps the modeler understand how sensitive their results are to their assumptions.

Monetary velocity is a frequently debated model input. While we defer analysis of cryptoasset velocity to a future article in the series, we include a sensitivity table in C95:E107 with various reference points. These tables can be expanded to include additional axes with other input variables.
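A one-axis velocity sensitivity can be sketched as follows (the $100bn TEV is hypothetical):

```python
def velocity_sensitivity(tev_usd, velocities):
    """Implied monetary base M = TEV / V across candidate velocity values."""
    return {v: tev_usd / v for v in velocities}

# A hypothetical $100bn TEV: halving velocity doubles the implied network value
table = velocity_sensitivity(100e9, [5, 10, 20, 40])
```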

Future improvements

With these improvements, the model is now able to answer some key questions: How does cryptoasset value relate to target markets? How can we incorporate market growth? How can we account for discounting by various investors? How do we know when we can stop modeling additional years? And how does our result change with different assumptions?

Nevertheless, there is still certainly room for further improvement. Two particularly interesting ones come to mind.

First is allowing for a dynamic velocity. Row 78 in the model allows velocity to change from year to year, but for now, we keep it constant. But there are many reasons to think that velocity might change over time. In the early days of the network, there may be a frenzy of speculative trading activity, which inevitably results in many on-chain transactions and a high velocity that will subside over time. On the other hand, as network infrastructure improves over time and transaction friction declines commensurately, velocity may increase. There are many possible scenarios, but we don’t currently have the understanding to accurately model them.

Second is dynamic discounting. Our model currently assumes that all investors over all horizons have identical discounting rates. In reality, short-term investments are generally discounted at a lower rate than longer-term investments. Furthermore, there are differing risk appetites that would mandate a higher or lower hurdle rate. What ultimately matters is the hurdle rate of the marginal investor buying your tokens, but this set of marginal investors varies depending on the size of your investment (essentially, slippage). While interesting, we don’t really have the clarity right now to drill down into discounting, and it would greatly complicate the model math.

While these two additions as well as several others are potentially compelling, given the immature state of these markets, the model already has a high number of degrees of freedom relative to the available information. In its current state, our model seeks to strike a middle ground between overparameterized and oversimplified.

How useful is the EoE model?

Ultimately, the EoE model is a fundamental model with many moving levers. It captures an economic value that reflects not only the present reality but also future potential growth. Since potential growth currently dwarfs any present economics (which are essentially non-existent for any network), the model’s insights are strictly limited.

The model will always be highly inaccurate for a new network and most helpful for a mature network with real economics. In the same way, a DCF is appropriate for a mature operating business, but not for an early stage startup.

But that doesn’t mean it is useless!

In particular, the EoE model gives good visibility into the factors that drive value and the potential for various use cases. That is, at this early stage, it shows us what valuation could be rather than what it should be. By highlighting the most important levers, the model gives valuable context and helps investors ask the right questions.

For example, the model allows us to apply a critical eye to current valuations of various tokens. If the current value is substantially more than any realistic terminal utility value of a token, then it is clearly overvalued. This is likely true for many tokens that have limited (low potential TEV) or unproven (high discount rate) target markets.

It is somewhat harder to say that an early-stage network is undervalued. Nevertheless, it does give insight into which use cases are most compelling. In particular, it is one of the reasons we, at Logos, are excited about payments. As highlighted in our overview of the space, payments is a massive but inefficient market where DLT (when implemented properly) could see significant adoption.

Importantly, payments can plausibly involve transactions directly denominated in the network token, which enormously increases potential volumes compared to a pure utility use case where only the transaction fees are denominated in the token.

A use case that doesn’t fare as well under this model is decentralized computation, which has generated the most excitement and attracted the most new projects in the space. Smart contract platforms have virtually no real-world volumes at present and relatively dim valuation prospects, since they have pure utility functionality and are at a significant disadvantage relative to more promising specialized use cases like payments. While payments may be less exciting technologically in the eyes of some (not us!), it is certainly more compelling from a potential valuation perspective.

Despite its limitations, the EoE model is the best framework we have to think about valuation. With the base model in place, in future articles we will explore the various inputs to further build an understanding of the fundamental drivers of cryptoasset valuation.

Alternative models

Before moving on, it is worth considering the alternative cryptoasset valuation models that have been proposed and comment on why they are not as useful as the EoE model.

Metcalfe Model

The most commonly proposed alternative model is Metcalfe’s Law, which states that network value is proportional to the number of unique possible connections between n nodes, n*(n-1)/2. The common example is fax machines: a single fax machine is useless, but its value increases as additional fax machines are added to the network.

The key point is that value is proportional to n². This means there is some constant factor K that needs to be estimated. As John Pfeffer points out, this factor may be very small, depending on use case and network conditions.

Furthermore, while the EoE model may have too many degrees of freedom, the Metcalfe model likely has too few. What exactly does K represent? We don’t have any real insight into the fundamental drivers of value beyond some vague notion that it should increase with the number of users. In other words, provided that Metcalfe’s Law is true, we know that the derivative of value with respect to number of users n is K*n, but nothing else!

There is also dispute as to whether this is even the correct formulation of Metcalfe’s Law! The Ethereum team, for example, claims that value is O(n * log n) rather than O(n²). There are economic arguments for both, but there are also arguments for alternatives like O(n^(3/2)).
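The competing formulations can be written out side by side; the constant K and the choice of law are exactly the contested parameters:

```python
import math

def network_value(n, k=1.0, law="n2"):
    """Contested Metcalfe-style laws; both K and the functional form are open questions."""
    if law == "n2":      # classic Metcalfe: unique pairwise connections
        return k * n * (n - 1) / 2
    if law == "nlogn":   # the O(n * log n) variant
        return k * n * math.log(n)
    if law == "n32":     # the O(n^(3/2)) alternative
        return k * n ** 1.5
    raise ValueError(f"unknown law: {law}")
```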

At best (i.e. when properly formulated), Metcalfe’s Law can vaguely tell us the order at which the network value will grow with additional users. But it is a far cry from the real, fundamental, and estimable drivers given by the EoE model.

There have been some mixed results fitting cryptoasset return series to various formulations of Metcalfe’s Law. However, they commonly rely on an R² metric that gives little insight into any true causal relationship. Furthermore, given the available degrees of freedom (constant, power, log, etc.) and only loose grounding in any fundamentals that would inform plausible ranges for those variables, it is highly likely these models are overfit. They certainly offer little predictive value.

Store of value

Another model focuses on the store of value use case, which is commonly associated with Bitcoin. The model itself is quite simple: the potential valuation of a store of value network is some (high) fraction of the total value of gold held for store of value purposes plus some (probably lower) fraction of the total fiat held as international reserves. John Pfeffer articulates this particular argument well in his investment overview.

While interesting, this model is also of limited value. Successfully becoming a store of value is highly path dependent — there is a reason that gold is a common store of value and metals, such as platinum, that are just as good are not. There are critical factors in this path, like volatility and usefulness, that the model gives no insight into. It is also detached from any fundamentals; as Pfeffer says, a store of value’s value is subjective. Furthermore, it obviously only applies to networks that have a plausible store of value function, and is not helpful for valuing other use cases.

Comparables Analysis and Precedent Transactions

Comparable analysis involves comparing the asset of interest to similar assets for which you know the price and financial metrics. This is a common traditional valuation technique that could entail, for example, finding the average revenue, profit, or EBITDA multiple for the sector and applying it to the target company’s own financial metrics.

This can be a useful tool for cryptoasset valuation. A reasonable present valuation for a decentralized file storage network could be ascertained from the public valuations of Sia, MaidSafe, and Storj, for example. But it doesn’t truly reflect long-term fundamental value. It only demonstrates what others are willing to pay for similar assets rather than what they should be worth. Until networks gain some real traction and maturity, comparable analysis will be of limited value.

A related model is precedent transactions analysis, which uses the historical prices of acquisition transactions to estimate the valuation of a target asset. While comparables analysis is based on what the marginal buyer will pay, precedent transactions analysis looks at what a buyer pays for an entire asset or company. Due to the public, decentralized nature of cryptoassets, this type of analysis is unlikely to be very useful. But there has been some limited M&A in the space — you never know if future innovation could enable more!

Where we go from here

Subsequent installments in our valuation series will delve into monetary velocity, the benefits of specialization, price volatility, and other topics.

In the meantime, we encourage you to download and play around with our EoE model. Stay tuned!