Other languages: 简体中文

Disclosure: I own BTC, ETH, BCH, RDN and other coins, but my portfolio is heavily diversified, so I don’t have financial incentives to shill for any particular coin. I’ve been contributing to Raiden and Celer projects, and I hope to see many more projects with different scaling solutions that will compete with each other, while users will have a freedom to use a payment solution of their choice. If you belong to a different camp, that’s fine, I have nothing personal against you and we can still collaborate. This article is brought to you by a privacy-oriented peer-to-peer marketplace LocalCryptos.

Intro

Watchtowers will have a tremendous impact on users of payment channels, so the two previous articles of this series sparked great discussions in the community: it’s important to research incentive models and different designs while the tech is at an early development stage. Unfortunately, there is a lack of research papers on this topic, and there are widely accepted misconceptions such as “we shouldn’t worry about UX and privacy at the start, because they can be improved over time”, which we will debunk in this article.

I’d suggest reading the first part of this series in order to understand why watchtowers are essential for scaling and what their main challenges are. Additionally, in the second article you can learn about different solutions to privacy and scalability issues, and their trade-offs. And finally, now it’s time to discuss viable business models and accountability in order to understand whether payment channels can scale without becoming a tool of global financial surveillance.

Note that off-chain protocols have different names for watchtowers, such as Raiden monitoring services, Celer’s State Guardian Network, or PISA. However, in this article we will refer to all implementations as “watchtowers”, unless specified otherwise (e.g., LN watchtowers, RDN monitoring services, etc.).

We will also use the terms:

business-oriented — to refer to watchtowers that can link together all state updates made by the same channel or account

privacy-oriented — to refer to watchtowers that cannot link together state updates made by the same channel or account

For more details about these models, see the first article.

Side note: some people have asked why the wording is “business VS privacy”. Well, businesses try to increase their revenue and survive the competition. Watchtowers that don’t care about privacy have more ways to increase their income and decrease their expenses, which makes them more viable as businesses.

Accountability

Each user wants to be sure that a hired watchtower will protect him 24/7 and be on his side during any dispute, so let’s start our journey by quickly outlining the main approaches to improving accountability and their drawbacks.

To better understand the main differences, let’s make a rough rating of the key characteristics of each model.

Keep in mind that “High” security doesn’t mean that a user’s funds are 100% safe, because there may still be vulnerabilities that can be exploited via e.g. collusion or hacker attacks.

It’s important to mention that these accountability approaches can be combined, and they will be used with different business models (described in the second part of this article), so the final outcome is unclear. We will think about the perfect solution to the watchtower problem in the last article of this series (coming soon).

Now let’s discuss the different accountability models in more detail.

1. A reward for a successful dispute of a malicious settlement

This approach is important to ensure that a watchtower will be on the user’s side in most disputes, unless it colludes with a malicious actor (e.g. a big hub) in order to get a higher reward for not stopping a fraudulent settlement.

The UX of this model is good as well, because a user won’t need to perform any additional actions apart from sending his state updates to watchtowers.

However, this approach creates a conflict of interest, because watchtowers will financially benefit if there are many cheating attempts or unintentional channel closures with invalid states.
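The mechanics above can be sketched in a few lines. This is a minimal, hypothetical model (the reward rate, the nonce-based state ordering, and the function names are all illustrative assumptions, not any protocol’s actual parameters): a cheater who closes a channel with a stale state forfeits part of the channel balance to whichever watchtower disputes with a newer signed state.

```python
# Minimal sketch of the reward-for-dispute model (hypothetical numbers and
# names, not any protocol's actual design). Balances are integer token units.

DISPUTE_REWARD_BPS = 500  # hypothetical: 5% of the channel balance, in basis points

def settle(closing_nonce, latest_nonce, channel_balance):
    """Return (watchtower_reward, amount_returned_to_victim)."""
    if closing_nonce < latest_nonce:
        # The watchtower proves a newer signed state exists: the fraudulent
        # settlement is reverted and the watchtower is paid from the
        # cheater's funds.
        reward = channel_balance * DISPUTE_REWARD_BPS // 10_000
        return reward, channel_balance - reward
    # The closing state was the latest one: nothing to dispute, no reward.
    return 0, channel_balance
```

The conflict of interest is visible directly in the code: the `reward` branch is reached only when cheating attempts actually occur, so an honest network generates no dispute revenue for watchtowers.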

2. Reputation system

Although implementing a reputation system is a popular approach to reduce risks on peer-to-peer platforms, there are many drawbacks:

A trust-based system runs counter to the trustless values of cryptocurrencies.

A reputation system won’t prevent mass-exit-scam scenarios, unlike on p2p exchange platforms. For example, if a trader on a P2P OTC trading platform decides to go rogue, his reputation will drop fast, so he won’t be able to cheat many users; a malicious watchtower, however, will be able to cheat all of its clients at once simply by ceasing to provide its services, even though the services were prepaid in advance. Additionally, an adversarial watchtower can collude with malicious hubs or other third-party entities to maximize its gains during a massive exit-scam.

It’s unclear how to make sure that a watchtower’s reputation will decrease if it fails to perform its duties. If a user loses his device with all the latest state updates, it will be almost impossible for him to prove that a watchtower failed to protect him, especially since laymen are not tech-savvy and rarely report such events proactively. So how can users trust that a watchtower’s reputation score is accurate and that it really did prevent all malicious settlements in the past?

A reputation system has poor UX, because a user has to take additional steps during the onboarding process in order to find a watchtower that suits his requirements, using some aggregator or so-called “marketplace”. Unfortunately, such a marketplace will add even more complexity to an already complex system and thus greatly limit adoption beyond geeks.

What if (1) a watchtower decides to exit the business without cheating anybody, or (2) a watchtower’s reputation seriously decreases? Who will notify the user about that, and how? Should a user again trust some third party to notify him in case he needs to perform additional actions?

What if a watchtower decides to exit the business, but continues to receive new state updates and payments without actually storing the state updates or monitoring the blockchain? How much time will pass before users find out that the watchtower is no longer protecting them?

The system is vulnerable to Sybil attacks (fake clients, reviews, etc.).

A reputation system increases centralization, because laymen often choose the most trusted vendor.

3. Security deposit to financially penalize malicious watchtowers

A watchtower can stake some coins/tokens, which it risks losing if it fails to dispute a settlement submitted by a malicious counterparty.

There are two different models: either a watchtower is randomly chosen by an algorithm from a pool of watchtowers, depending on its stake size, or each user chooses his own watchtower directly.

3.1. A pool of watchtowers

A watchtower is randomly picked by an algorithm from a pool of watchtowers, depending on its stake size. The more coins one stakes, the more likely one is, statistically, to get assigned to watch users’ states and earn fees. The key idea is that each state update is distributed among multiple watchtowers (e.g. 3–5), and if the first watchtower fails to fulfill its duties in time, other watchtowers will be able to take its deposit.
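Stake-weighted assignment of this kind can be sketched as follows. This is an illustrative model only (the function name, the weighting scheme, and the selection procedure are my assumptions, not Celer’s actual algorithm): each state update is guarded by k distinct watchtowers, each drawn with probability proportional to its stake.

```python
# Illustrative sketch of stake-weighted guardian assignment (not any
# protocol's actual algorithm): pick k distinct watchtowers, where a
# watchtower's chance of being drawn is proportional to its stake.

import random

def assign_guardians(stakes, k, seed=None):
    """Pick up to k distinct watchtowers from {name: stake}, weighted by stake."""
    rng = random.Random(seed)
    pool = dict(stakes)
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(pool.values())
        r = rng.uniform(0, total)
        cumulative = 0
        for tower, stake in pool.items():
            cumulative += stake
            if r <= cumulative:
                chosen.append(tower)
                del pool[tower]  # the same tower is never picked twice
                break
    return chosen
```

Drawing without replacement (deleting each chosen tower from the pool) is what forces the redundancy the text describes: a single entity cannot occupy all k guardian slots for one state update unless it splits its stake across fake identities, which is exactly the deposit-splitting concern raised below.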

This solution is utilized by Celer Network and has great advantages such as strong collusion resistance, but there are certain drawbacks and concerns:

Side note: keep in mind that the concerns below are about an accountability solution (a pool of watchtowers), not about Celer in general. However, some concerns below are valid for Celer’s SGN as well.

Each state update will be guarded by multiple watchtowers, which improves overall security, but as a trade-off it increases the operating costs of the network and overall off-chain transaction fees, and decreases privacy.

Staked coins are removed from circulation, decreasing overall liquidity, because in order to achieve high security, a watchtower’s deposit size should ideally match the maximum possible loss of a user’s funds.

If the system doesn’t require a watchtower’s deposit size to match the maximum possible user losses, then during periods of low competition the average deposit will be too low to prevent mass-exit-scam scenarios via collusion between multiple entities (e.g., watchtowers and hubs). On the other hand, in case of high competition the entry barrier will be too high, which typically leads to business centralization.

If a watchtower loses its whole deposit when it fails to stop any malicious settlement, then staking a big deposit becomes very risky. To avoid such risks, a watchtower will try to split its deposit into multiple pieces, acting as multiple independent watchtowers. That will give one entity higher chances to receive all state updates from the same transaction, which will increase the risk of cheating attempts via e.g. collusion with malicious hubs.

If a watchtower with a large deposit gets disproportionately high chances of being appointed (e.g., holding 20% of the network’s total deposit yields a 30% chance of being appointed), then that will increase centralization.
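A quick way to see this effect is to model the appointment probability as a superlinear function of stake. The exponent here is a purely hypothetical assumption (no protocol is claimed to use it); it just demonstrates how any weighting that grows faster than the stake share over-represents large stakers.

```python
# Illustrative only: if appointment probability is proportional to
# stake ** exponent with exponent > 1 (hypothetical), large stakers get
# a bigger share of appointments than their share of the total deposit.

def appointment_share(stake, all_stakes, exponent=1.2):
    """Fraction of appointments a staker receives under superlinear weighting."""
    weights = [s ** exponent for s in all_stakes]
    return stake ** exponent / sum(weights)
```

With exponent = 1.0 the appointment share equals the deposit share (20% of the deposit earns 20% of appointments); with any exponent above 1.0 the large staker's share exceeds its deposit share, which compounds over time into the centralization the article warns about.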

The system is sensitive to price swings (e.g., during bear markets people are less incentivized to become watchtowers, because they have to stake coins that are constantly losing value; while during bull runs watchtowers will be financially incentivized to start the unstaking period and sell their deposits).

If a user loses his device with all the latest state updates and can’t recover cloud backups, then he won’t be able to submit a proof that a watchtower failed to dispute a malicious settlement in order to get compensation from the security deposit. There are some workarounds, though: e.g. Celer Network is planning to solve this with SGN full nodes, which are basically watchtowers that store all the latest state updates for all channels and can help a user punish malicious watchtowers in exchange for a fee. However, there are concerns about the implementation, privacy, and viability of the full-node model, because full nodes will have high operating costs that have to be covered.

A user cannot prove malicious behavior and get compensation if he comes online after malicious watchtowers have unstaked their deposits. Of course, the unstaking period should be very long, but a user can end up in jail, in a coma, etc.

Mo Dong, co-founder of Celer Network, is more optimistic about these issues; you can read his responses to most of the issues above in this Twitter thread (there are many branches).

3.2. Individual watchtowers

Another approach is that each user chooses a certain watchtower, which has locked up large funds in a smart contract as a “security deposit”. This approach is used by PISA, but it also has certain drawbacks and concerns:

Large collateral requirements for watchtowers will lead to centralization.

There are two different designs. (1) Suppose a watchtower loses just a portion of its deposit in case of a failure to defeat an attack; then that won’t prevent massive exit-scam scenarios, in which a big watchtower colludes with a big hub to cheat many users at once.

(2) In another design (the original PISA approach), a watchtower loses its whole stake (e.g., $50k) in case of a failure to defeat even a small attack (e.g., $10). On the one hand, this decreases the risks of collusion, which is good; but on the other hand, it increases the business risks of operating a watchtower, since a network outage or other technical problems can lead to the loss of the whole stake. And it’s fair to say that high business risks lead either to high fees or to low competition, and hence higher centralization.
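The difference between the two slashing designs comes down to a one-line inequality. The sketch below is a back-of-the-envelope model with illustrative numbers (the function name and the parameterization are my assumptions): collusion pays off only when the amount stolen exceeds what the watchtower stands to lose by staying silent.

```python
# Back-of-the-envelope comparison of the two slashing designs (illustrative,
# not an actual protocol implementation).

def collusion_profitable(stolen_amount, stake, slash_fraction):
    """True if a watchtower gains by ignoring a fraudulent settlement.

    slash_fraction = 1.0 models the all-or-nothing design (lose the whole
    stake on any failure); smaller values model partial slashing.
    """
    return stolen_amount > stake * slash_fraction
```

Under full slashing, a $10k theft against a $50k stake never pays; under 10% partial slashing, the same theft beats the $5k penalty, which is exactly why design (1) fails to deter large colluding exit-scams.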

During a long bear market people will be less incentivized to stake large deposits in order to become a watchtower, which leads to lower competition and greater centralization.

Poor UX, because a user has to find a watchtower in some database that contains an id/name, a security deposit value, a deposit period (withdrawal date, if any), and monitoring fees, and then make a conscious decision about which watchtower to choose. Additionally, guarding fees can change anytime, or a watchtower can decide to unstake its deposit, so the user will be forced to find another watchtower.

Poor UX will probably increase centralization even further, since laymen tend to use the most popular vendor.

This model also shares certain drawbacks with the previous model: large deposits decrease overall liquidity, the system is less stable during price swings, and a user can’t punish a watchtower that has already unstaked its deposit.

Patrick McCorry, creator of PISA, disagrees with certain statements above and argues that UX barely exists in any off-chain protocol; you can read his point of view in this Twitter thread (it has many branches).

Now that we have a basic understanding of the different approaches to accountability, let’s finally move on to the most anticipated part of this series.