A 360° foundational explainer on crypto fees and the optimal fee model.

Background:

While much attention has been paid to the issue of capacity and scalability, arguably the larger hurdle to broad adoption of cryptocurrency is the steep transaction cost. Over the past 12 months, Bitcoin and Ethereum transaction fees have averaged more than $5 and $0.60, respectively, with peaks exceeding $20. As a result, a number of large merchants and payment service providers — Steam, Stripe, Microsoft — dropped Bitcoin payment integrations. While cryptocurrencies were originally envisioned as a leap forward in payments, they often cost substantially more than conventional payment rails. Along with unacceptably slow speeds, this cost makes crypto a strictly inferior payment option for most use cases.

If these networks are not useful now due to cost, how can they possibly gain enough traction to make capacity matter?

LTM average transaction fees exceeded $5.00 and $0.60 for Bitcoin and Ethereum, respectively.

The economically astute reader will note that, in the traditional crypto model, capacity and cost are, in fact, one and the same. Bitcoin, Ethereum, and most legacy crypto networks have a dynamic, market-determined network fee. That is, users sending transactions choose the amount of fee that they wish to pay, and then, validators (miners, in the case of proof-of-work networks) choose which transactions they wish to include. Naturally, validators will tend to choose the transactions with the highest fees because that maximizes their profit.¹

This results in a classic supply-demand equilibrium, where the transaction fee is the market clearing rate where supply meets demand. There is a scarce resource — the space in the next block, or, more generally, the “slots” in the next set of transactions to be processed — and as demand for that resource increases, so does its price.
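This clearing dynamic can be sketched in a few lines of code. The numbers below are purely illustrative (not any real network's mempool), and `clear_market` is a hypothetical helper, but the mechanism is the one described above: validators greedily fill a fixed number of slots with the highest-fee transactions, and the lowest included fee is the market clearing rate.

```python
# Minimal sketch of a dynamic fee market with fixed block capacity.
# Validators greedily include the highest-fee transactions; the marginal
# (lowest included) fee is the market clearing rate.

def clear_market(bids, capacity):
    """Return (included_bids, clearing_rate) for a block of fixed capacity."""
    included = sorted(bids, reverse=True)[:capacity]
    return included, (included[-1] if included else 0.0)

bids = [0.10, 5.00, 0.75, 2.50, 0.01, 1.25, 3.00, 0.50]  # offered fees, in $
included, rate = clear_market(bids, capacity=4)
print(included)  # [5.0, 3.0, 2.5, 1.25] -- the four highest bids win
print(rate)      # 1.25 -- the marginal bid sets the clearing rate
```

Any bid below the clearing rate waits for a later block, which is exactly the congestion behavior described in the next paragraphs.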

As this logic suggests, the increases in fees in Ethereum and Bitcoin occurred during periods of high transaction demand where capacity constraints were binding. Every big ICO or DApp launch (CryptoKitties being the most notable example) saw a commensurate spike in Ethereum transactions, and thus transaction fees, while the Bitcoin mempool spiked massively during periods of high excitement. The result was highly congested networks where transactions with “normal” fees often waited days or weeks to be processed.

Alarmingly, this rise in prices occurred at effectively zero adoption of either Bitcoin or Ethereum. The fact that these networks cost so much with such small volume highlights their severe technical limitations vis-à-vis both capacity and cost. The fact that other networks have not seen a similar spike in fees is a testament to their utter lack of adoption, beneath even that of Bitcoin or Ethereum.

The issue of fees has not gone unnoticed. A number of newer networks have proposed a variety of alternative fee structures, each claiming to “solve” the fee problem at capacity. Most tantalizing, several networks, including EOS, IOTA, and Nano, claim to have no fees by design. At first blush, this sounds great — if the protocol prohibits fees, then they can’t rise! Like many claims in crypto, unfortunately, “zero fees” turns out to be too good to be true.

There has been some sparse academic research into fees — see, for example, section 7.3 of this paper for a good review. Unfortunately, most research makes assumptions about conditions to optimize around, such as some concept of “fairness” or predictability, without justifying those objective functions. While such results may sound appealing, how can we know which outcome, among many plausibly good outcomes, is indeed the best? A more deductive and analytical approach is needed.

What, then, is the answer to the existential problem of fees? And how can we go about thinking through the pros and cons of the various proposed fee models to figure out which one is optimal? Luckily, these questions are relatively straightforward to answer with basic economics.

Fee Economics 101

It turns out that crypto network costs are quite simple from a microeconomics standpoint.² There is a single, self-contained market for a uniform good — a transaction validation slot — with a set of effectively anonymous buyers and sellers. There are relatively few exogenous factors or externalities that need to be taken into account. While standard microeconomics models must be caveated heavily in normal markets, the simple, insulated structure of crypto fee markets results in a straightforward analysis.

The way economics analyzes this type of simple market is through supply and demand curves. I won’t review the basics, but I will note several interesting considerations for crypto fee markets. I’ll assume you know how supply and demand curves are used to find market prices and what consumer and producer surplus and net welfare mean — if you don’t, you should review the Wikipedia article.

The standard crypto fee market involves a set of producers (validators) supplying validation slots that are bid for by consumers (the users of the network).

Interestingly, the crypto market differs from a simple supply-demand market in that not everyone gets the good at the same price. If you submit a transaction with a fee of $1 and the market clearing rate is $0.50, you will still pay $1. This ends up having no impact on the actual market dynamics — it just shifts consumer surplus to producer surplus.
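This surplus shift is easy to verify with a toy example. Assuming users bid their true valuations and ignoring validator costs (both simplifying assumptions, and `surplus_split` is a hypothetical helper), total welfare is identical whether winners pay their own bid or the uniform clearing rate:

```python
# Toy comparison of pay-as-bid vs. a uniform clearing price.
# Illustrative valuations; assumes truthful bidding, ignores validator costs.

def surplus_split(valuations, capacity):
    winners = sorted(valuations, reverse=True)[:capacity]
    clearing = winners[-1]
    # Uniform price: every winner pays the clearing rate.
    uniform = (sum(v - clearing for v in winners), clearing * len(winners))
    # Pay-as-bid: every winner pays exactly what they bid.
    pay_as_bid = (0.0, float(sum(winners)))
    return uniform, pay_as_bid

uniform, pay_as_bid = surplus_split([4.0, 3.0, 2.0, 1.0], capacity=2)
print(uniform)     # (1.0, 6.0) -- consumer surplus 1, producer surplus 6
print(pay_as_bid)  # (0.0, 7.0) -- all surplus shifted to producers
assert sum(uniform) == sum(pay_as_bid)  # total welfare is unchanged
```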

The primary wrinkle for the crypto fee market is that supply is typically fixed — Bitcoin, for example, has one block of roughly 2,000 transactions every 10 minutes, or about 3 transactions per second. In general, there are only so many transactions per second that a given network can process. Typically, crypto observers assume this means that supply is perfectly inelastic — it can never increase or decrease.³

Most crypto networks have perfectly inelastic supply of validation slots

However, supply is only perfectly inelastic after a certain point. That is, supply is capped, but the producers in the market won’t necessarily supply up to that cap for zero cost. Specifically, the supply curve is upward sloping until it hits the cap (variable costs of validation, typically quite small) and has an initial jump (fixed costs of validation, often quite large). In Bitcoin, for example, fixed validation cost is very high, so validators would not process transactions for free.

In reality, fixed costs do not impact the marginal production but rather the decision whether to produce (validate) at all. The below chart roughly (but, strictly speaking, inaccurately) accounts for the effect of fixed costs by shifting up the supply curve. It does accurately show the net social welfare, which is our chief concern. As we will see below, due to a variety of other factors, these fixed costs can be abstracted away into a pure decision of whether or not to validate.

Updated supply and demand curves reflecting approximate fixed and variable costs of validation

The longtime Bitcoin enthusiast will object to this statement — for many years, transactions with no fee were processed quite regularly in Bitcoin. There are a couple of factors that impact this simple supply curve and explain this discrepancy.

First, traditional cryptocurrencies also include an inflationary validator payment (the block reward). This has the effect of shifting the supply curve down, just as a government subsidy shifts down a market supply curve. If the inflation is large enough, the initial sloped section of the supply curve all falls below zero, meaning that validators will process zero-fee transactions (typically, networks will not allow negative transaction fees). As we will see later, however, inflation comes at a cost.

Second is the more nebulous concept of crypto enthusiasm and benevolence. Many people derive utility or some uneconomic benefit from participating in and supporting the network. This is seen in the people who run full nodes on Bitcoin and Ethereum, incurring the cost of running a node without any economic benefit.⁴ Sometimes, they can actually derive some direct economic benefit — for example, a blockchain explorer might run a full node and serve ads on the site, or a merchant might run a full node to accept crypto payments and save on transaction fees. Regardless, the “enthusiasm” factor has the same effect as the inflation subsidy: shifting the supply curve down.⁵ There is a limit though — once the cost exceeds their utility, these “benevolent” participants will stop running their nodes.

An underappreciated reality is that there is a real economic cost of running a node. Generally, this cost scales with transaction volume:⁶ CPU, data storage, electricity, and bandwidth are costly goods, and you need more of all of them for each additional transaction you process each second. Due to this increasing cost, it is economically unsound to rely on the goodwill of participants. We’ll explore this concept more in a bit.

This variable cost manifests itself as an upward slope in the supply curve. In a well-designed crypto network, the slope of the supply curve until it hits the capacity limit should be quite shallow, reflecting the low marginal cost of validation. That is, as long as you are already set up to validate, the incremental cost of validating one more transaction is quite low. However, the aggregate cost of processing a large number of transactions is non-negligible.

There are a few offsetting effects that shift the supply curve up. These include the upfront costs of both staking and mining, such as buying network tokens or mining equipment or the time and dollar cost of securing your operation, preventing DDoS attacks, and so on. Validators will demand a certain minimum payment to supply their service, which is reflected in the fixed costs mentioned previously.

Nevertheless, the net effect of these factors is that the entirety of the sloping part of the supply curve is well below zero in a network like Bitcoin or Ethereum.

Net effect of inflation subsidy plus general enthusiasm shifts the supply curve down. Typically, this means we can use the basic perfectly inelastic model since the upward sloping part of the curve is all below zero.

All this applies to traditional crypto networks where there is a true fixed capacity. In a better-designed network where capacity is determined by validator hardware rather than by network limitations, there will actually be a somewhat upward sloping supply curve, since validators can upgrade their hardware if the fees rise commensurately. The upward sloping curve also can reflect the differing costs of the various validators — some may have cheaper access to electricity or hardware, for example.⁷

We’ll assume the fixed supply market in this article for simplicity. In general, though, the fixed supply is a conservative assumption in that it makes the arguments for the alternative (non-dynamic) fee models look better than under a variable supply assumption.⁸ That is, all the conclusions in this article will apply equally or more to a variable supply market.

Also, for simplicity, we will assume that transaction finality occurs concurrently with consensus. This is not the case with proof-of-work and other probabilistic consensus algorithms — in Bitcoin, for example, there is always a small chance that a transaction can be reversed, which declines exponentially with the number of blocks since it was processed. There are several perverse dynamics in the fee market under probabilistic consensus, such as those detailed in a recent BIS paper, that significantly complicate this analysis. Since these issues are primarily security rather than economic considerations, and more robust non-probabilistic consensus models offer strict improvements over probabilistic ones, it is sufficient here to note that they do not materially impact the broader conclusions.

An interesting alternative approach is to model the fee market as an auction (albeit a non-standard one where there are repeated opportunities), but we will leave such analysis for a future post. The conclusions are substantially the same as the basic supply-demand model.

This simple fixed supply model provides a basis that now allows us to dig into the various models for fees.

Fee models:

There are four main basic fee models:

1. No fees
2. Fixed fees
3. Inflation fees
4. Dynamic market fees

All other fee models boil down to either one or a combination of the four basic models.

From an economics standpoint, it is easiest to think of the dynamic market fees as the “base case” that matches the standard supply-demand analysis outlined above. The other three cases correspond to various shifts or changes in the supply and demand curves, with implications for overall efficiency and global utility or welfare.

We’ll start with the no-fee model, which will also serve as the most extreme illustration of several problems that affect both the fixed and inflation fee models.

Zero

The fee model that has made the biggest splash recently is the zero-fee model, with IOTA, Nano, and others claiming to have no fees. The supposed value proposition is, of course, compelling: if there are no fees, then anyone can use the network, no matter the transaction size, and never have to worry about fees. Unfortunately, there are several flaws with the no-fee model that make it unviable in a long-term, healthy network.

The True Cost of Transacting

First, the true cost of transacting in a healthy network can never be zero. At a minimum, there is some computational (and thus electric) cost to sending out a transaction.

Of course, if a transaction only involves a cryptographic signature, then this computational cost is virtually zero. However, any pseudo-anonymous network with genuinely zero fees will inevitably be spammed until it becomes unusable (any zero-cost long-term mitigations require some permissioning that breaks the fundamental value proposition of a cryptocurrency). Thus, the network requires some economic spam-prevention measure. Commonly, this is in the form of a proof-of-work proof, but this is economically equivalent to a direct fee since computing the proof-of-work has a non-trivial cost (otherwise, it would be useless at preventing spam).
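A hashcash-style stamp illustrates why such proof-of-work schemes are fees in disguise. The sketch below is generic (not any specific network's scheme, and `mint_stamp`/`verify_stamp` are hypothetical names): producing a valid stamp costs roughly 2^difficulty_bits hash attempts on average, while verifying costs one hash. That asymmetric computational cost is the "hidden fee" that deters spam.

```python
import hashlib

# Generic hashcash-style proof-of-work stamp: find a nonce such that
# SHA-256(message:nonce) falls below a difficulty target. Minting costs
# ~2**difficulty_bits hashes on average; verification costs one hash.

def mint_stamp(message: str, difficulty_bits: int = 16) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_stamp(message: str, nonce: int, difficulty_bits: int = 16) -> bool:
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mint_stamp("pay alice 5 coins")
assert verify_stamp("pay alice 5 coins", nonce)
# A different message would almost certainly need its own stamp, so spamming
# many transactions means paying the computational cost again and again.
```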

While dynamic fees are represented in the demand curve itself, these hidden fees can be represented by a downward shift in the demand curve since they decrease the net utility a consumer gets from sending a transaction. Regardless, from a consumer’s perspective, they can be viewed as equivalent costs to transacting, and all popular zero-fee networks are just direct fee networks with a different name. There is a significant difference from the validator’s perspective, however, since they are not receiving any compensation.

Tragedy of the Commons and Free Riders

The obvious question that arises in a network with no validator reward is why anyone would validate on the network if they do not receive any payment.

IOTA answers this by forcing end users to validate other transactions as part of submitting their own, which effectively ends up being an additional hidden fee. More importantly, the resulting consensus model is insecure,⁹ so it is not a valid example of a viable decentralized network fee model.

Other networks, notably Nano, claim that “Average Joe” network participants will run full validator nodes since they use the network and would like to maintain it, rather than simply relying on light wallets and third-party nodes. While this may seem plausible on the surface, this argument has long been refuted by the Tragedy of the Commons result.

The Tragedy of the Commons is a simple economic phenomenon that illustrates the issues with common ownership of an asset or good that needs to be maintained. In its classic formulation, a town has a common pasture area where its residents can let their animals graze. This is a good outcome for everyone involved as it is much more efficient than everyone keeping their own grazing grounds. However, the individual townspeople have little incentive to contribute to the upkeep of the commons, resulting in free riders. These people observe that they can pay nothing to maintain the commons (and treat it more roughly than they would their own property) and still enjoy the benefits of the common resource.

Each person who decides to free ride increases the cost of upkeep for everyone who does pay since the same fixed upkeep cost must be met, resulting in a positive feedback loop. Of course, the logical end result is that virtually no one pays for the upkeep (anyone who does decide to pay is effectively left “holding the bag”), and the commons becomes unusable for everyone. Like in the related Prisoner’s Dilemma, the game-theoretically dominant individual strategy of not paying leads to a worse outcome for everyone involved.

More generally, given economically rational actors and the absence of an enforcement mechanism, any common, shared resource will logically be abused by free riders. The key lesson of the Tragedy of the Commons is that altruism or enthusiasm alone is not enough to maintain a common good.

This is an empirically proven phenomenon that everyone should be able to identify in their own lives; it comes as a shock to no one that people abuse and exploit common resources like parks or public transportation more than their own goods.

There are a few mitigations to such an outcome. Often, a tax is levied on the entire community to pay for upkeep. Alternatively, a toll or fee is charged for use of the resource. Both of these approaches are inherently contrary to a zero-fee network. Social pressures play an important role in preventing the worst abuses, as free riders can be labeled as such and will have to deal with the consequent stigma, but this too is contrary to a pseudo-anonymous, open network.

As an open, trustless, peer-to-peer system, a crypto network can be viewed as a form of a commons. There is a certain amount of work (transaction validation) that needs to be done to maintain a network and make it usable to the benefit of all. The fewer people who do this work, the worse the network is for everyone — transactions may be slower, security would be lower, and reliability would be worse — until it is useless for everyone.

Unfortunately, as mentioned previously, this work is expensive, with both fixed (acquiring network stake, setting up DDoS protection, ensuring sufficient reliability) and variable (data, bandwidth, CPU, electricity, etc.) costs. A network that relies on people doing this work out of genuine altruism or even based on the realization that everyone is better off if the resource is maintained will inevitably have a Tragedy of the Commons result, as economically rational actors will free ride off the other participants. In a pseudo-anonymous crypto network with a non-trivial number of transactions (and thus a non-trivial cost of validation or “upkeep”), this result is economically inevitable.

There are some trivial cases where a Tragedy of the Commons may not be the end outcome, particularly when the network is immature, as there are users who generally derive utility from participating in the network, even for free. A network probably is able to rely on enthusiasts if it only processes a few transactions per second, but once it starts processing thousands of transactions per second, the large cost of running a full node will likely soon outweigh the “soft” benefit those users get from running the node. How many people are willing to pay hundreds of dollars per month to run a node when they can free ride off others?

In existing crypto networks, the vast majority of users use light wallets that rely on other nodes. In a network with a well-designed consensus, this does not increase their risk of being a victim of a double spend or other attacks,¹⁰ so free riding is a dominant strategy.

Nano is the most prominent proponent of this “validation by desire to participate” strategy, but it ironically serves as an early example of the free rider phenomenon. Virtually no true third parties run network validators, with almost all voting power held by the development team or closely affiliated light wallets, applications, and exchanges. The largest validator on the network, Binance, often does not even bother voting in consensus, which has led to some practical issues and concerns.

As with all aspects of network design, the focus should not be on what works now but what will work at maturity. The zero fee approach boils down to relying on user’s altruism or enthusiasm, which is not a sustainable growth model.

Validator Complexity and Security

Even if validators participated solely out of altruism, it is doubtful that “Average Joe” merchants and users would be able to maintain a robust network. A mature network processing thousands of TPS will require substantial expertise in building and maintaining a node, even in AWS or some other cloud service. Only so much can be optimized and streamlined: threat models evolve, security demands adherence to best practices, and networking itself is non-trivial. A regular company will only devote so many technical resources and so much unproductive working capital (in the case of a staking network) to running a secure, robust node on the network before giving up and just using a third party instead.

The fact that many networks do not have the basic protections that result in this complexity is not a signal that they are unneeded but rather a reflection that those networks are poorly designed, not significant enough, or not sufficiently decentralized to bother attacking. A poorly set up node that is subject to DDoS attacks and frequent downtime does not contribute to the network and may, in fact, decrease its efficiency.

Deadweight Loss in a Perfectly Inelastic Market

There is yet another issue at play: deadweight loss. Mandating zero fees is effectively equivalent to imposing a price ceiling of zero on the market.

In a normal supply-demand market, price ceilings result in deadweight loss, which means total welfare (consumer surplus plus producer surplus) decreases. In other words, the economy overall is worse off. The same basic analysis has been demonstrated in rent control, which results in fewer housing units overall and decreased mobility. The end result benefits the few people who get rent controlled apartments but hurts the rest of the population. Unlike rent control, however, where there are several complicating factors, the transaction market is extremely simple and self-contained, meaning that the deadweight loss analysis is straightforward.

The recent Econ 101 student will object by pointing out that the transaction market differs from normal markets due to having a perfectly inelastic (fixed) supply (by our previously stated assumption), which means that there would be no deadweight loss — only a transfer of surplus from producer to consumer.

In a classic economic model, there is no deadweight loss due to a price ceiling in a market with perfectly inelastic supply. However, this fails to recognize the “lottery effect,” where those that value the good the most are not necessarily the ones that receive the good. Source.

Once the “lottery effect” is taken into account, there are many consumers who receive the good who value it lower than the consumers without the price ceiling. This results in deadweight loss, which has practical usability issues in the zero-fee model. Source.

However, this common economics result has a key flaw — it assumes that the consumers that get the good are those that value it most. In reality, price ceiling markets typically act as a lottery amongst all consumers whose reservation price exceeds the price ceiling. The result is that the consumers that actually receive the good have, as a whole, lower aggregate consumer surplus than the consumers that receive the good in a free market. Thus, there is a true, overall deadweight loss even if supply is fixed. This insight is reflected in the more sophisticated Marshallian approach.¹¹

By way of example, imagine that the network can process 10 transactions per second, but there is demand for 100 transactions per second. In a zero-fee network, none of the demand will be priced out by a fee, so all 100 transactions will be competing for 10 slots each second. Some of these transactions may be very important and thus highly valued, such as paying a highway toll (a long delay could cause a huge traffic jam). Other transactions will have barely any value at all to the participants, such as a $0.0001 transfer. And yet, under the zero-fee system, both transactions are treated identically and are equally likely to get one of the slots. If all of the slots are taken by low utility transactions and many high utility transactions are excluded or delayed, then the network will fail to maximize overall utility. And since any distribution of transaction utilities is almost certain to have a large right tail, there are many more low utility transactions than high utility transactions, all equally likely to get a spot.
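This 10-slots/100-transactions example can be simulated directly. The exponential utility distribution below is an illustrative assumption (any right-tailed distribution behaves similarly): the random "lottery" allocation captures far less total utility than an allocation that prioritizes the highest-value transactions.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# 100 transactions competing for 10 slots each second; per-transaction
# utility drawn from a right-tailed (exponential) distribution.
utilities = [random.expovariate(1.0) for _ in range(100)]
slots = 10

# Dynamic fee market: the 10 highest-utility transactions win the slots.
priced_welfare = sum(sorted(utilities, reverse=True)[:slots])

# Zero-fee "lottery": 10 transactions chosen uniformly at random.
lottery_welfare = sum(random.sample(utilities, slots))

# The lottery leaves high-utility transactions out: deadweight loss.
assert lottery_welfare < priced_welfare
print(round(priced_welfare, 1), round(lottery_welfare, 1))
```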

This deadweight loss has real, practical implications — participants need a network that will reliably prioritize their most important transactions, and one that cannot reliably process a transaction due to this “lottery effect” will not be very useful. For example, a merchant whose transaction may or may not be processed, and who has no discretion to increase the fee to prioritize it, may be left waiting indefinitely — hardly an acceptable outcome!

Note that high utility/value should not be confused with high amount. It is very possible, for example, for a $1,000 transaction to only result in a $1 economic surplus between the participants, as is very common in low-margin businesses. Conversely, a $10 transaction might result in $9 of surplus. Any prioritization rule, such as giving preference to larger transactions, is, at best, a first-order approximation with non-trivial loss.

In the best case, the system will result in an obfuscated, back-room fee market where validators are paid under the table for prioritizing specific users’ transactions. While this would prevent the “lottery effect,” it is a strictly worse version of the transparent, dynamic fee market and subject to abuse.

This is the essence of deadweight loss — a few lucky people benefit while everyone else has a suboptimal outcome.

As a result of these issues, the zero-fee model is critically flawed and cannot support a viable, long-term crypto network.

Inflation subsidy

A couple of the issues introduced by zero fees boil down to the validators having no incentive to run and maintain the network. Several networks, most notably EOS, have tried to maintain the consumer benefits of zero fees while paying the validators with newly minted, inflationary tokens. Superficially, this seems to enable transactions of any size and provide a legitimate incentive to validators.

While it does help solve the free rider and Tragedy of the Commons issues, the inflation model too has several issues that make it unattractive for a sustainable network.

The first issue again is one of deadweight loss and the “lottery effect.” Since the entire cost of transacting is covered by the inflation subsidy, there is no ability for the market to differentiate between high and low utility transactions. Just as in the zero-fee model, this results in low utility transactions being given equal weight as high utility transactions in competing for limited spots that are lotteried off by the validators.

This “lottery effect” severely hampers the overall usefulness of the network at maturity. In the absence of any filter, extreme low-value transactions will dominate and take most of the available slots, leaving legitimate, higher value transactions waiting potentially indefinitely. As seen in Bitcoin and Ethereum at various points in 2017 when mempools ballooned, this effectively renders the network unusable to average users. Although those networks allow dynamic fees, their capacity is so constrained that these periods amounted to a lottery among high-fee transactions.

The deadweight loss imposed by the inflation model can be viewed alternatively as subsidy deadweight loss. While our idealized fee market is different due to the fixed (perfectly inelastic) supply, the implication is the same: the market is being externally subsidized, and those funds have to come from somewhere.

Deadweight loss from a subsidy in a standard market (without perfectly inelastic supply). A subsidy costs the ecosystem as a whole more than it receives as benefit. Source

In a decentralized crypto network, the subsidy is not coming from the government but rather inflation in the token supply. This inflation subsidy is like a government subsidy in that the cost is ultimately borne by “society” as a whole — the holders of the tokens. Each additional token created erodes the value of the existing tokens since there is no change in total network value.

Thus, the inflation subsidy can be viewed as a proportional tax on existing token holders that is redistributed to the validators on the network. If a network is worth $1bn in aggregate, for example, a 10% inflation is equivalent to a 9.1% tax on token holders that distributes $91mm to validators.¹² This tax is independent of the actual utility the user derived from the network. In fact, a user who sends comparatively few but high-value transactions may even derive less overall utility from the network than a user with many low value (think a fraction of a cent) transactions who is able to grab the vast majority of available transaction slots. This perverse dynamic discourages users from holding tokens, with significant implications for network security and growth.
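The footnoted arithmetic can be checked directly. Holding total network value constant (an assumption stated above), 10% inflation dilutes existing holders' share to 1/1.1, a roughly 9.1% effective tax:

```python
# Checking the footnoted example: 10% inflation on a $1bn network, with
# total network value assumed unchanged by the new issuance.

network_value = 1_000_000_000  # $1bn aggregate network value
inflation = 0.10               # 10% new tokens minted for validators

holder_share_after = 1 / (1 + inflation)    # existing holders now own 1/1.1
tax_rate = 1 - holder_share_after           # effective tax on token holders
validator_value = tax_rate * network_value  # dollar value minted to validators

print(round(tax_rate, 3))            # 0.091 -> a 9.1% tax
print(round(validator_value / 1e6))  # 91 -> ~$91mm distributed to validators
```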

It is generally considered far preferable to directly charge individuals in proportion to their use of a shared, limited resource than subsidize via broad base taxation. An analogous scenario is road usage, where funding via tolls (whether direct or indirect via fuel taxes) is far more efficient than funding via general taxes.

So while the inflation-only model fixes several of the issues with the zero-fee model, it still results in a suboptimal outcome that severely impairs network usefulness at maturity.

Fixed

The third major alternative fee model that new networks have adopted is fixed fees, where every transaction pays the same fee that is set by the protocol. This has notably been implemented by Stellar, in which each network version fixes the fee and upgrades are required to modify that fee.

The reasoning behind fixed fees is admirable but ultimately flawed. On the surface, fixed fees appear to be more equitable than dynamic fees, which ostensibly allow a wealthier person to pay more in fees to get their transaction confirmed faster. Unfortunately, this logic does not quite hold, since it equalizes transactions rather than users. For example, a wealthy user who wants to send many small, low utility transactions is advantaged versus a poorer user who has a few but relatively higher utility transactions. Instead, fixed fees result in a worse outcome for the network users as a whole compared to dynamic fees.

Under the economic framework, fixed fees can be viewed as a “top down” management approach that results in deadweight loss relative to the “bottom-up”/“invisible hand” approach of the dynamic fee model.

There are three possible outcomes for the fee charged under the fixed fee model, all of which are strictly worse than the dynamic fee model.

First, the fee may be higher than the market clearing rate. This corresponds to a price floor, which results in a deadweight loss. Practically speaking, it means that users of the network are paying too much for the service provided, to the benefit of the validators. Many socially valuable transactions that could use the network at the dynamic market rate are unable to use the network under the higher fixed rate, reducing overall social welfare. This is analogous to the purported impacts of minimum wage, where the people who hold on to their jobs benefit but a proportionally greater number of people lose their jobs, resulting in an overall negative outcome. While the minimum wage case is controversial due to confounding factors and complexity, the fee market, as we have observed previously, is extremely simple and self-contained, and thus, the conclusions are straightforward.

Alternatively, the fee may be lower than the market clearing rate. This corresponds to a price ceiling, also with deadweight loss. The zero-fee model is merely the most extreme version of this ceiling, and the same issues persist at any non-zero level. If the fixed fee is below the cost of validation, the network will likely fail or become significantly impaired as free riders cause a Tragedy of the Commons.¹³ Regardless of the level of the fixed fee, treating all transactions equally produces an adverse “lottery effect” that inefficiently allocates the scarce, valuable resource of transaction space. As with both the zero-fee and inflation-only models, this effect at its most extreme can render the network unusable for many high-value users, such as merchants.

The third possible outcome is that the fixed fee happens to exactly equal the market clearing rate. In this case, the result is identical to the dynamic fee market. Even then, the fixed model is worse, since exact equality is a degenerate case unlikely to hold for long. Demand for transaction space naturally varies over time; one might expect, for example, higher demand during normal 9-to-5 business hours than on a Sunday at 5 a.m. As demand shifts, the dynamic fee that optimizes overall social welfare shifts with it, while the fixed fee, by definition, remains constant. At best, the “top down” upgrade system could repeatedly adjust the fixed fee to match the market clearing rate, but that simply recreates dynamic fees in a more centralized form.
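The three outcomes above can be made concrete with a toy fee market. The sketch below uses purely illustrative assumptions (a block of 100 slots, 500 pending transactions with uniformly random utilities, none drawn from any real network) and compares total welfare under dynamic fees, a fixed fee above the clearing rate (a price floor), and a fixed fee below it (a ceiling resolved by lottery):

```python
import random

random.seed(0)

CAPACITY = 100  # slots per block (assumed)
utilities = sorted((random.uniform(0, 10) for _ in range(500)), reverse=True)

# Market-clearing fee: the utility of the marginal transaction that just fits.
clearing_fee = utilities[CAPACITY - 1]

def welfare_dynamic():
    # Dynamic fees: the highest-utility transactions fill the block.
    return sum(utilities[:CAPACITY])

def welfare_fixed(fee):
    willing = [u for u in utilities if u >= fee]
    if len(willing) <= CAPACITY:
        return sum(willing)                       # price floor: block underfilled
    return sum(random.sample(willing, CAPACITY))  # price ceiling: lottery

print(f"clearing fee          ~ {clearing_fee:.2f}")
print(f"dynamic welfare       : {welfare_dynamic():.1f}")
print(f"floor  (fee too high) : {welfare_fixed(clearing_fee + 1):.1f}")
print(f"ceiling (fee too low) : {welfare_fixed(clearing_fee - 1):.1f}")
```

Both fixed-fee variants come out below the dynamic welfare: the floor leaves slots empty, while the ceiling's lottery lets low-utility transactions crowd out high-utility ones.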

There are a few points that supporters of fixed fees might raise to push back on this logic. One potential benefit is that everyone pays the same rate, which may be more equitable provided that the fixed fee is close to the dynamic fee. However, this does not improve overall social welfare; it merely shifts surplus from producers to consumers. Since users will never pay more in fees than the utility they derive from their transactions, variation in transaction fees does not reflect any true economic inequality in the system. Confusion on this point usually arises from conflating transactions with users. A quick counterexample: consider a poor user who must pay many bills on the network versus a rich user who only needs to send a single transaction. Under the fixed fee model, the poor user ends up unfairly paying more in total than the rich user.

Admittedly, there are some cognitive costs to forcing everyone to choose their own fee. It is simple, however, to instead have everyone choose a maximum fee, and then charge every transaction in the batch the minimum fee included, which is the market-clearing fee. This largely eliminates the fee-selection game played by users, and with it most of the cognitive cost.¹⁴
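A minimal sketch of this batch rule (a hypothetical helper, not any network's actual implementation): every user submits the maximum fee they are willing to pay, the highest bids fill the available slots, and everyone included pays the lowest included bid.

```python
def clear_batch(max_fees, capacity):
    """Include the highest bids; charge all of them the lowest
    bid that made it in, i.e. the market-clearing fee."""
    included = sorted(max_fees, reverse=True)[:capacity]
    return included, included[-1]

bids = [5.0, 1.2, 9.9, 3.3, 0.4, 7.7]
included, fee = clear_batch(bids, capacity=4)
print(included)  # [9.9, 7.7, 5.0, 3.3]
print(fee)       # 3.3 -- every included user pays this, not their own bid
```

This largely removes the guessing game: a non-marginal user can simply bid their true utility, since the fee they actually pay is set by the marginal transaction rather than by their own bid.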

Second, fixed fees could smooth out the short-term fee spikes observed in both Bitcoin and Ethereum. While this is true by definition, it is not actually a desirable outcome. Spikes in fees represent genuine spikes in demand for high-value transactions; why should a substantially lower-value transaction be artificially advantaged (relative to the dynamic fee outcome)? Forcing fees to be constant and smooth does not eliminate the underlying swings in demand; it merely makes transaction confirmation far less predictable. Opacity is a far cry from equity.

The third and most plausible, but still ultimately unconvincing, argument for fixed fees is that they simplify the user experience and eliminate the possibility of “fat finger” fees. This phenomenon appears to be relatively rare even in the remarkably user-unfriendly world of crypto, and it is much better solved at the application level than at the protocol level. Simple safeguards, such as fee limits, suggested fees, and further abstraction of the user away from the underlying protocol, can eliminate the vast majority of these mistakes. Bending the base protocol to compensate for poorly designed user experiences, at the expense of network efficiency, is a poor development approach.

Dynamic

The best basic fee model is dynamic fees. As stated in the model definition, allowing users to set their own fee and letting validators select the transactions with the highest fees for processing results in an efficient market for fees. This market incurs no deadweight loss and maximizes overall social utility and network efficiency.

The dynamic fee market automatically adjusts to changes in capacity and demand and efficiently allocates space to the highest utility transactions. This is a far better outcome than the effective lottery imposed by the other models, where many low-value transactions will take space from higher-value transactions.

The merits of this model have been touched on in the discussion of the other fee models, but there are a few additional features that are worth mentioning.

Dynamic, user-defined fees allow users to effectively place “limit orders”: they can submit a low-value transaction with a commensurately low fee. If the current market-clearing fee exceeds that fee, the transaction simply waits for a period of lower demand, when it will automatically be processed.
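This limit-order behavior can be sketched as a mempool that retries each round. The sketch is illustrative only; real mempools add expiry, replacement, and size limits, and the clearing fees here are made-up numbers.

```python
def process_round(mempool, clearing_fee, capacity):
    """Confirm transactions whose max fee meets the current clearing
    fee; the rest wait in the mempool like resting limit orders."""
    eligible = sorted((tx for tx in mempool if tx[1] >= clearing_fee),
                      key=lambda tx: tx[1], reverse=True)
    confirmed = eligible[:capacity]
    waiting = [tx for tx in mempool if tx not in confirmed]
    return confirmed, waiting

# Each transaction: (label, max fee the user attached)
mempool = [("payroll", 0.50), ("coffee", 0.01), ("invoice", 0.30)]

# Busy period: high clearing fee, the low-fee transaction waits.
busy, waiting = process_round(mempool, clearing_fee=0.25, capacity=2)
print([t[0] for t in busy])   # ['payroll', 'invoice']

# Quiet period: demand subsides and the waiting transaction clears.
quiet, _ = process_round(waiting, clearing_fee=0.005, capacity=2)
print([t[0] for t in quiet])  # ['coffee']
```

No user action is required in between: the low-fee transaction rests until the market comes to it, exactly like a limit order on an exchange.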

Most of the benefits of dynamic fees are observable at capacity. Before capacity is reached, the equilibrium fee level should be less than or equal to the fixed fees or implied inflationary fees (assuming that the latter two fees exceed the cost of validation; otherwise the network is not sustainable). This means that the user of the dynamic fee network is never worse off than users of other networks in the long run.

To the extent that the network capacity can vary with validator hardware, the rise in fees when capacity is reached will encourage existing validators to upgrade their hardware and better validators to join the network. The result is a higher performance network that benefits all users.

A common critique of non-zero direct fee models is that they complicate token transactions. For example, a stablecoin user on Ethereum must buy Ether in addition to the stablecoin in order to pay gas fees. Once again, this is a user experience issue rather than a protocol issue, and a quick fix at the protocol layer that erodes its efficiency and overall usefulness is the wrong approach. As user experience continues to improve, many of these issues can be easily abstracted away. Few users have difficulty navigating the use of credits in mobile games or online services, and network tokens held to pay fees can be viewed much the same way.

Ultimately, increasing capacity, rather than fiddling with fee models, is the best approach to making fees equitable and accessible. Of the four basic fee models, only dynamic fees do not have any serious long-term drawbacks.

When is inflation good?

While the dynamic fees model is the clear winner of the basic fee models, it is possible that some combination of the fee models results in a better overall outcome. As we saw in the discussion of the various models, the effects of each can be easily incorporated into the microeconomic analysis. We can boil down each of the alternative models into some form of price control, subsidy, or tax, all of which result in deadweight loss in the self-contained fee market. This would seem to suggest that there is no place for inflation in an economically rational design.

However, there are some additional macro considerations that impact this analysis and suggest that there is a place for inflation in conjunction with a direct fee.

Inflation can help bootstrap an immature network to broader adoption in several ways. First, it subsidizes early adopters at the expense of existing token holders. This is equivalent to the founders and early investors in the network providing an incentive to early adopters, which is a very common business strategy.

Second, inflation that exceeds the cost of validation can enable early users to transact with zero fees for themselves, which can streamline the user experience. While it is a poor long-term strategy to bend the protocol to deal with UX issues, the reality is that the state of crypto infrastructure remains quite poor and justifies accommodating initial users to get the network off the ground.

Third, inflation can help distribute tokens to early adopters who are meaningfully contributing to the network. By providing a compelling value proposition at an early stage, the network can attract strong validators that increase network decentralization and robustness. Note that all of these benefits are short-term; at maturity, inflation for the sake of users is harder to justify.

Nevertheless, there is a compelling reason to keep some inflation in the network long term: macroeconomic stability. The consensus in central bank monetary policy is that optimal inflation in an economy is around 2% per annum. While antithetical to the early crypto ethos, which is highly critical of inflationary policy, a low level of inflation has clear benefits for an economic system, reducing frictions and promoting overall growth.

The dangers of the alternative, deflation in a growing economy with a constant monetary supply (as in Bitcoin), are well documented. By encouraging the hoarding of money, deflation introduces frictions into the economy, which can produce a severely adverse, self-reinforcing feedback loop called a deflationary spiral.

Before the introduction of modern central bank policy, deflation was at the center of many of the most severe economic crashes. By maintaining some modest inflation, a crypto network can grease the wheels of its economy and mitigate several existential risks.

Finally, there is a plausible reason to broadly tax the token holders of a network via inflation. All holders derive a genuine benefit from the network's existence, whether they are currently using it or not. The very option of being able to use an efficient, cheap, secure, and decentralized network going forward is a positive externality, for example as a hedge against potential censorship or as infrastructure on which to build a future business. While imperfect in its incidence, an inflation tax does reflect the fact that the network's benefits extend beyond its direct users.

The Logos Approach

So what, then, is the optimal fee policy?

At Logos, we have concluded that the best approach is dynamic, user-paid fees plus modest inflation. This combines the lessons of both the microeconomic and macroeconomic models to promote sustainable growth across Logos’ life cycle.

By permitting users to choose their own fees and validators to prioritize transactions accordingly, we can maximize overall network effectiveness and usefulness. The dynamic fee model yields the best microeconomic outcome by responding instantly to changes in demand and encouraging sustainable network growth.

The inflation component helps jumpstart the initial growth of the network in the short term. In the long term, a reasonable level of inflation keeps the network economy running smoothly and avoids major macroeconomic risks. By design, Logos concentrates the most expensive validation tasks in the hands of a relatively small number of delegates (while ensuring that these delegates are strictly accountable to the rest of the network, and, thus, that the network has a high level of security). As a result, a small amount of inflation can be a very large reward for the delegates, ensuring healthy competition that maximizes network performance. Such a level of inflation imparts very little deadweight loss on the overall network (provided that there is a dynamic fee to eliminate the “lottery effect”), making this a good compromise.
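To see why a small inflation rate can still be a large per-delegate reward, consider a back-of-the-envelope calculation. All figures below are hypothetical, chosen for illustration, and are not actual Logos parameters:

```python
# Hypothetical parameters -- purely illustrative, not Logos' actual numbers.
TOTAL_SUPPLY = 1_000_000_000   # tokens outstanding
INFLATION_RATE = 0.02          # 2% new issuance per annum
N_DELEGATES = 32               # validation concentrated in a small delegate set

annual_issuance = TOTAL_SUPPLY * INFLATION_RATE
reward_per_delegate = annual_issuance / N_DELEGATES
print(f"{annual_issuance:,.0f} tokens minted per year")
print(f"{reward_per_delegate:,.0f} tokens per delegate per year")
```

Even a modest 2% issuance, split among a few dozen delegates, yields a substantial per-delegate reward, which is the competition-driving effect described above.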

Logos’ biggest fee design decision is indirect: maximizing capacity. By providing optimal payments infrastructure that pushes capacity to the limits of hardware and beyond, Logos ensures that direct fees will remain close to the marginal cost of validation, even at global levels of adoption.

Conclusion

Ultimately, there is no single correct economic fee model, and even top economists will disagree over the proper level of inflation. Economic models are inherently simplifications and can belie much of the complexity present in a market. While simple in most respects, crypto has several unique properties that cloud this analysis in the short term, particularly the emotionally charged ideologies that are inseparable from the users of many networks.

Nevertheless, this does not invalidate the models’ conclusions. By applying simple principles that hold at a mature, rational economic equilibrium, we can see that the most hyped fee models (zero fees, fixed fees, and inflation only) have severe limitations that will significantly impair the networks that adopt them. While alternative models may promote short-term growth, projects and communities must be realistic about what will work in the long term if they wish to succeed.