How the Lightning Network maintains Bitcoin's value proposition

In part 1 of my toxic bitcoin series we will explore the flawed narratives of “on-chain scaling” proponents and Lightning Network critics, and why their fallacies are not only damaging but amount to a direct attack on bitcoin. It will centre around one individual: Bitcoin Unlimited chief scientist Peter Rizun. This is for simplicity, to avoid conflating the subject with past failed political attacks and his recent re-emergence on the topic. This goes far beyond him; many others have perpetuated these damaging ideas.

A brief history and explanation of bitcoin and the Lightning Network: one of the first people to really get excited about the bitcoin whitepaper after the pseudonymous Satoshi Nakamoto released it was Hal Finney, who very early on came to the realization that it does not scale without severe consequences for bitcoin's value proposition and damage to censorship resistance and decentralisation. Many of Satoshi's assumptions were naive and in many cases proven wrong, but these ideas evolved, and very early on the two of them sketched out the structures of off-chain scaling solutions and payment channels. Satoshi even described a very basic concept of hash time-locked contracts, and a form similar to how atomic swaps are achieved, while discussing BitDNS (which became the first fork of bitcoin, known as Namecoin).

“If you’re still worried about it, it’s cryptographically possible to make a risk free trade. The two parties would set up transactions on both sides such that when they both sign the transactions, the second signer’s signature triggers the release of both. The second signer can’t release one without releasing the other.” — Satoshi Nakamoto (comment in regard to trading bitcoin for other non-repudiable commodities)

These ideas make up the basic structure of what is now the Lightning Network ("Satoshi's vision", as the big blockers would put it). However, the payment channels and HTLCs described there are not what Lightning uses today, and they would not have been secure against attacks from miners or even unexpected propagation delays. Sequence-number transactions were possible in the original codebase through unconfirmed time-locked transactions, but there was no way to enforce them once the update mechanism finalised, and they could be double spent. Here is Satoshi in an email to a bitcoin developer:

“One use of nLockTime is high frequency trades between a set of parties. They can keep updating a tx by unanimous agreement. The party giving money would be the first to sign the next version. If one party stops agreeing to changes, then the last state will be recorded at nLockTime. If desired, a default transaction can be prepared after each version so n-1 parties can push an unresponsive party out. Intermediate transactions do not need to be broadcast. Only the final outcome gets recorded by the network. Just before nLockTime, the parties and a few witness nodes broadcast the highest sequence tx they saw.” — Satoshi Nakamoto

The implementation of segwit (deployed via BIP91 and forced via the user-activated soft fork BIP148, with full nodes signalling support) solved the problem of malleability attacks while also giving us a flexible block size of up to 4MB (and here is why this figure is important). However, all previous ideas for payment channels were based around timelocks, which are not very practical: they expire quickly, so they don't scale beyond a few peers. A different idea was explored, a punishment-based model in which old transactions are invalidated by secrets rather than timelocks. This allowed for much more than simply mono- or bi-directional payments. You could now build a large-scale network capable of very high transaction throughput, securely and trust-minimized.

So a proposal was written: the Lightning Network white paper by Joseph Poon and Thaddeus Dryja (Christian Decker produced a paper for a competing payment channel network at the same time). It describes how payment channels enable instant, high-volume transactions on a second layer using the security of the base layer (we will come back to this later). Alice (with $10) and Bob (with $10) create a 2-of-2 multisignature entry on the blockchain, and the blockchain only sees a $20 multisignature ledger entry (a channel: a confirmed UTXO between two inputs that will receive two outputs). Once they have received confirmations on the blockchain, they can then transact with each other instantly, in any variation back and forth within their total combined balance (many micro-transactions before the channel becomes unbalanced). Each transaction has an agreement and is signed off instantly, and each time they sign off on a new state they invalidate the old state (which is also important for later). Now we start adding more users and more channels, as this back and forth is not very useful between just two people, and this is where we start using what are called hash time-locked contracts (HTLCs). This is where payments get routed through the network via end-to-end encrypted multi-hop transactions. These are trustless, cryptographically secured by hashes, and conditional: the payment must go through instantly from sender to receiver across all participants, or it times out and is cancelled entirely, enforced without counterparty risk. Now what about when participants close their channels? With the current state contract, if Bob goes offline, all Alice has to do is broadcast the transaction to the blockchain and the current state will be settled on chain.
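The hash-lock half of an HTLC can be sketched in a few lines. This is a toy illustration of the conditionality described above, not real Lightning code:

```python
import hashlib
import os

# Toy hash lock: the receiver picks a secret preimage and shares only its
# SHA-256 hash. Funds along the route can only be claimed by revealing the
# preimage, which then unlocks every hop back to the sender.

def new_invoice():
    """Receiver generates a secret preimage; only its hash is shared."""
    preimage = os.urandom(32)
    payment_hash = hashlib.sha256(preimage).digest()
    return preimage, payment_hash

def claim(payment_hash: bytes, candidate_preimage: bytes) -> bool:
    """A hop releases funds only if the revealed preimage matches the hash."""
    return hashlib.sha256(candidate_preimage).digest() == payment_hash

preimage, payment_hash = new_invoice()
assert claim(payment_hash, preimage)            # the correct secret unlocks payment
assert not claim(payment_hash, os.urandom(32))  # anything else fails
```

Real HTLCs combine this hash condition with a timelock branch, so that if the preimage never appears the funds are refunded after a timeout.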
Now say Bob wants to be greedy and take some of the balance that is now Alice's. He could try to cheat by broadcasting an old state (which is unlikely; old states are deleted automatically). This is where some important game-theoretic incentives come in, along with the development of something called (unfortunately, in my opinion) watchtowers. A watchtower will see that this old state has been broadcast, and not only will Alice get all of her money back, she will also take all of Bob's money for being a dick. So the disincentive to even try to cheat is very high. Anyone can and should operate a “watcher node”. These are the basic principles of how rules are enforced off chain and settled on chain. Note that this is an abstraction; there are enforcement mechanisms without watchtowers too. So the underlying bitcoin blockchain provides security via confirmations, which is then enforced via smart contract (this is going to get annoying, but we will get to Rizun and come back to this later).
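The penalty game described above can be sketched as a toy state machine (an assumed simplification, not how LND or c-lightning actually store state): invalidating a state means revealing its revocation secret, so broadcasting a revoked state forfeits the whole channel balance.

```python
import hashlib
import os

# Toy penalty mechanism: each superseded state's revocation secret is handed
# to the counterparty, so anyone broadcasting an old (revoked) state loses
# their entire balance to the other side.

class Channel:
    def __init__(self, alice: int, bob: int):
        self.balances = {"alice": alice, "bob": bob}
        self.revoked = set()                 # ids of states already invalidated
        self.state_secret = os.urandom(32)

    def state_id(self) -> str:
        return hashlib.sha256(self.state_secret).hexdigest()

    def update(self, new_alice: int, new_bob: int) -> None:
        # signing a new state = revealing the old state's secret to the peer
        self.revoked.add(self.state_id())
        self.balances = {"alice": new_alice, "bob": new_bob}
        self.state_secret = os.urandom(32)

    def settle(self, broadcast_state_id: str, broadcaster: str) -> dict:
        if broadcast_state_id in self.revoked:
            # watchtower/peer proves revocation: the cheater loses everything
            victim = "bob" if broadcaster == "alice" else "alice"
            return {victim: sum(self.balances.values()), broadcaster: 0}
        return dict(self.balances)           # honest close: current balances

ch = Channel(10, 10)
old = ch.state_id()
ch.update(15, 5)                             # Alice is now owed 15
assert ch.settle(ch.state_id(), "bob") == {"alice": 15, "bob": 5}   # honest
assert ch.settle(old, "bob") == {"alice": 20, "bob": 0}             # penalty
```

The key incentive is visible in the last line: attempting to cheat with a revoked state costs Bob his entire remaining balance, not just the disputed amount.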

“it’s a smart contract dispute resolution mechanism, a decentralised judge that cannot be bribed” — Elizabeth Stark

There are different independent implementations with full interoperability, and many important developments and arguments over important topics are being had at the moment. It is currently evolving so fast it's tough even for me to keep up: static channel backups, onion-routed spontaneous payments without the need for an invoice, Tor nodes, hodl HTLCs (pre-commitment, delayed-settlement transactions), Neutrino lite clients, balancing loops, splicing and channel factories. I will get to some of that later, but let's start breaking down some of Rizun's bullshit:

He has given many pseudo-economic arguments that are fundamentally and technically flawed around block size limits, and reiterated his belief in a fee market with unlimited block size, claiming artificial limits are wrong. This is off topic, but he always manages to bring it up. He states that block size limits are production quotas, so the free market will overcome them. This makes absolutely no sense, and it is the origin of many of the misconceptions big blockers try to stack onto each other. He believes the size limit is just software that people can choose to run or not run, or modify under the MIT licence any way they choose, and that the market will simply converge on a limit and a single piece of software. This entire concept is as preposterous as saying the 21 million supply cap, the fixed tightening monetary policy, or the 10-minute block interval is a production quota.

Software is made by limitations, and protocols are literally defined by rules. People could decide to make changes or do whatever they want, but then they would no longer be using the protocol. When you create a protocol you have to prescribe standard behaviour, and you restrict the actions of the users exchanging information. So the bitcoin protocol must restrict the behaviour of participating parties; this is not constraining the market with production quotas and central planning. It is simply how all software protocol layers work, with well-defined rulesets, just as in the TCP/IP model.

For example, Luke Dashjr recently suggested that users shouldn't use segwit for non-Lightning-Network transactions, to try to reduce the average size of propagated blocks, with the intention of slowing blockchain bloat (a valid and relevant concern). My objection was that this will never happen and is a pointless endeavour, as there is no incentive for users. It's essentially the same as asking mining pools “nicely” to coordinate on propagating smaller blocks (which they are capable of); again, it will never happen. This is why we have rules, not imaginary social contracts, as people will use the protocol within the constraints given.

Following on from this, Rizun always puts forward the idea that there would be an emergent block size even without a protocol-level limit. This is technically correct: in a “limitless” block size scenario there would eventually be an effective block size equilibrium. But the whole idea is ridiculous, because you cannot get there without impossible initial sync times and the crashing of average users' nodes as they hit bottlenecks. Even if you get rid of the block size limit and the difficulty adjustment and leave miners to propagate whatever they choose, they will eventually be constrained by other resources, but that equilibrium would be achieved through perpetual inflation and constant block orphaning. This would cause serious centralisation pressure, under which the whole thing breaks down, as Greg Maxwell explained and Rizun then tried to plagiarize. The problem is not whether there will be an equilibrium, but which equilibrium you get if you shape a protocol this way, and whether it complies with the goals you strive to reach; it is obviously going to have severe adverse effects on the network and its participants. Rizun assumes incentives are endogenous to the system, for example that miners will do whatever is most economically net positive inside the framework of the bitcoin ruleset. This is not the case, and the assumptions behind his flawed models are wrong. Miners could do anything they are asked to do, whether by political force, subpoena, or payment by outside forces for malicious purposes, as we have seen with the bcash hash war. Miners could be coerced into something that causes them to temporarily lose money in order to acquire more power or relevance later. So the game theory assumption is wrong. Most equilibria will eventually be broken by game theory.

It is important to note that miners don't actually validate anything; they simply compete, by brute-force search, to find a SHA-256 hash of a block below a certain value, which becomes increasingly difficult. It is the pools' or miners' full validating nodes that validate transactions, and all full nodes can choose whether to reject those blocks. (This is exactly as Satoshi described, although his terminology of “node” for miners and “client/network node” for full validating nodes was confusing; originally CPU full nodes mined by default.) If users cannot run full nodes because of the above misconceptions, you can see why there could be a problem: the less connected or more centralised the network is, the more it opens up the possibility of eclipse attacks and Sybil attacks. If a 51% attack were to occur in a network where everyone validates, the worst-case scenario is some double spends and disruption. But if an attacker controls the entire network during a 51% attack, because users have been priced out of running full nodes, then the attacker can cause havoc: stealing funds, inflating the supply, or changing the protocol just about any way they choose. Rizun often points to his simulated experiments. The problem with simulations in bitcoin is that they only make sense if you include game theory with attackers whose incentives are not necessarily endogenous to the system. Bitcoin is a system created to resist censorship and to flourish especially in cases of geopolitical unrest, collapse of central banking and so on. The success scenario of bitcoin is close to the scenario of collapse of nation states, monetary policy, central banks, tax revenues and capital controls over borders. So if you propose a scenario in which bitcoin flourishes only under ideal technical constraints, with best-case connectivity, bandwidth, CPU power and so on, it becomes complete nonsense.
The problem is that you designed a system to flourish and succeed in a context where this technological best case is far from guaranteed. Users could have local network access blocked and require a VPN or Tor for every transaction or block relayed; there may be geographical network partitions between authoritarian states, geopolitical sanctions, and nothing like a best-case internet connection. Users could be forced onto satellites, mesh networks, or radio transmission. The more you need bitcoin, the more likely it is that the system around you has collapsed: the financial system, the surrounding infrastructure, internet, power grid (blackouts), access to high-grade hardware. The more likely censorship becomes, the more rampant corruption probably is, and the more you will need something like Tor (which has its own limitations) both for privacy and for a stable address; you need a Tor address, because on a dynamic IP you will suffer serious connection problems. The benefits of being able to run on low-power single-board computers have recently been seen across Venezuela, with people able to power equipment from widely available low-cost resources and simple battery setups. Simulations need to be put through the worst-case scenario, and things are not perfect even now with the current block size and average propagated block. So none of Rizun's arguments make sense other than in an ideal, best-case world, which would not be an open one; really, his experiments are only used to fit the confirmation bias of his flawed narratives. Any block size increase today would also break backwards compatibility of the chain and, being as contentious as it is likely to be, would cause a chain split (or simply an insecure minority fork that does not follow consensus). It could also be argued that, alongside access to low-cost hardware, a variety of hardware in itself makes the network more resilient.
If it came to needing centralised server farms, that would be disastrous in itself, but an oligopoly of server hardware is another huge attack vector that could become compromised or backdoored. As we have also seen on other contentious forked chains, larger blocks force users to run full nodes on cloud-hosted virtual machines, which is another serious central point of failure. I strongly advise against this, especially for merchants with economic nodes!
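As noted above, mining is a brute-force search for a block hash below a target, while validation happens on full nodes. A toy sketch of that search (illustrative only; real Bitcoin double-hashes an 80-byte header against a vastly harder target):

```python
import hashlib

# Toy proof-of-work: increment a nonce until the double-SHA256 of the
# "header" interprets as an integer below the target. Real mining uses an
# 80-byte block header and a target set by the difficulty adjustment.

def mine(header: bytes, target: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# An easy target (roughly one leading zero byte) so this runs instantly.
easy_target = 1 << 248
nonce = mine(b"block header bytes", easy_target)
digest = hashlib.sha256(
    hashlib.sha256(b"block header bytes" + nonce.to_bytes(4, "little")).digest()
).digest()
assert int.from_bytes(digest, "big") < easy_target
```

Note that nothing in this loop checks transactions at all; that is exactly the point, since validation is the job of full nodes, not of hash power.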

There is another issue with larger blocks: miners can produce what is known as an attack block, constructed in such a way that it takes a very long time to validate. With legacy (pre-segwit) signature hashing, the work needed to verify a transaction grows quadratically with its size, so larger blocks make the attack far worse, and this can act as a severely centralizing force.
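The growth of that validation cost can be checked with back-of-envelope arithmetic. The per-input size below is a rough assumption, not an exact serialization; the point is the quadratic scaling:

```python
# Sketch of why legacy "attack blocks" are slow to verify: pre-segwit
# signature hashing rehashes (roughly) the whole transaction once per input,
# so total bytes hashed grows quadratically with the number of inputs.
# 180 bytes per input is an illustrative assumption.

def bytes_hashed(n_inputs: int, bytes_per_input: int = 180) -> int:
    tx_size = n_inputs * bytes_per_input
    return n_inputs * tx_size        # one near-full-transaction hash per input

for n in (100, 1_000, 10_000):
    print(n, bytes_hashed(n))
# 10x the inputs means roughly 100x the hashing work
```

Segwit's BIP143 sighash removed this quadratic blowup for segwit inputs, which is one reason simply raising the legacy block size is more dangerous than it sounds.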

Now we can move on to 0-confirmation transactions, a narrative repeatedly pushed by Rizun that links directly to our Lightning Network topic and to important points from earlier. There are two complete contradictions Rizun has pushed: first, that 0-conf transactions are perfectly safe; and second, that Lightning Network transactions are somehow unsafe. These are mutually exclusive; the logic simply cannot hold at the same time. 0-conf transactions are not an example of bitcoin scaling on chain. They are in fact, by definition, off chain. If you argue that 0-conf is safe, then you are basically arguing that you do not need a blockchain at all. While it is not impossible in theory to have some system of network convergence over a single spending history that prevents double spending without block confirmations, it is unsustainable, and this is why bitcoin succeeded where all other attempts failed. The point here is that 0-conf is an off-chain transaction with risk attached and a high degree of trust involved, whereas Lightning Network transactions are off-chain with a smart contract security guarantee that punishes possible double-spend attempts to a high degree. So Rizun needs to choose which is true in his logical inconsistencies, or plain dishonesty. And we have recently seen many examples of bad actors getting away with major double-spend attacks against merchants accepting 0-conf. This will only become more prevalent as it becomes more widely known and understood.

The next misconception that Rizun repeatedly pushes concerns SPV wallets (simplified payment verification). As it stands, these do not exist as Satoshi described them in the bitcoin whitepaper, and likely never will, despite mobile wallet marketing, big blocker narratives and Mike Hearn's claims. Rizun claims these and the above-mentioned points are the:

“solution to bitcoin scaling worldwide and 4 billion users validating their own txs”

…as he has put it (that figure went up to 5 billion recently). The problem is that they do not use trustless validation. Unlike a full node, they trust miners; they would either follow the heaviest chain or rely on Rizun's belief that users can trust “social consensus” (block explorers, social media) in the event of a contentious split, and they do not validate anything they receive. Solutions like bloom filters improve the privacy of mobile wallets, but they can be broken, causing that private information to be scattered over multiple servers. BIP157/158 Golomb-Rice block filters, as implemented in “Neutrino”, are another development mentioned earlier: a lite client with client-side filtering, and a major improvement over bloom filters. Again, this can enable non-custodial payments and even better privacy than bloom filters, but it still trusts miners' validation. The best solution we currently have is to pair with your own full node, preferably through Tor (which will likely become a default in Bitcoin Core, or even via implementing bitcoin's own encryption layer for pairing), though this could still leave you open to isolation attacks.
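A minimal sketch of the Golomb-Rice coding that BIP158 block filters are built on. This is heavily simplified: real filters hash items with SipHash into a numeric range and use a Rice parameter of P = 19, while here P is tiny and the items are plain integers.

```python
# Toy Golomb-Rice coder over a bit string: sort the values, delta-encode,
# then write each delta as a unary quotient plus a P-bit remainder. This is
# the compression primitive behind BIP157/158 compact block filters.

P = 4  # Rice parameter: remainder bit width (tiny here; BIP158 uses 19)

def encode(values):
    bits, prev = "", 0
    for v in sorted(values):
        delta = v - prev
        prev = v
        q, r = delta >> P, delta & ((1 << P) - 1)
        bits += "1" * q + "0" + format(r, f"0{P}b")  # unary q, fixed-width r
    return bits

def decode(bits, count):
    values, pos, acc = [], 0, 0
    for _ in range(count):
        q = 0
        while bits[pos] == "1":
            q += 1
            pos += 1
        pos += 1                              # skip the terminating 0
        r = int(bits[pos:pos + P], 2)
        pos += P
        acc += (q << P) | r                   # undo the delta encoding
        values.append(acc)
    return values

items = [3, 17, 40, 41, 90]
assert decode(encode(items), len(items)) == items
```

A wallet downloads these compact filters per block and tests its own addresses locally, only fetching full blocks that match, which is why nothing about its addresses leaks to the serving peer.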

You can begin to see how all these points intertwine and Rizun's narratives unravel together. Something like Neutrino is touted as providing a non-custodial on-ramp to Lightning Network mobile wallets, and if used correctly it could be a powerful tool, as it preserves more privacy via client-side filtering (Node Launcher is one example; another is Wasabi Wallet, which doesn't trust miners). But it can also be used incorrectly and marketed in a way that disincentivises users from running a full node, which is damaging to network growth. The other arguments against Neutrino are limited and much the same as for current “SPV” solutions. It is entirely possible to run a full node today on an Android device via the ABCore client, which could also be used with mesh networks and the other solutions mentioned earlier by those who need them and could find a way; bitcoin users who push the capabilities of “tinkering” full nodes with simple hardware, low-power single-board computers and alternative communication methods prove this today. We can see that all the factors above would be severe compromises and would amplify each other's level of trust. We cannot make any protocol completely trustless, but we can add many layers of trust minimization.

“We need more socially scalable ways to securely count nodes, or to put it another way, with as much robustness against corruption as possible, assess contributions to securing the integrity of a blockchain. That is what proof-of-work and broadcast-replication are about: greatly sacrificing computational scalability in order to improve social scalability. That is Satoshi’s brilliant tradeoff” — Nick Szabo

For further reading I would highly recommend his “Money, Blockchains, and Social Scalability”.

There is another issue with on-chain scaling that cannot be ignored and is often swept under the rug by its proponents: the effect of bloat, not only in terms of the bottlenecks we've covered (initial sync, blockchain storage, CPU, memory and bandwidth) and their effect today on people with limited resources, but the long-term detrimental effect it will have on the rate of growth and on every individual's ability to maintain the full UTXO set, which all nodes need to function. UTXO set growth is flexible, but if you cannot maintain the set then you would be incapable of running even a pruned node, just as full blockchain growth could outpace the hardware every user needs to initially sync a full node. Breaking funds into many small inputs and outputs in large blocks would cause this growth to run away, as explained in much greater detail here.

Rizun has previously claimed that a lightning network could be deployed over bcash. This is wrong, and not just because of transaction malleability (which can be solved in ways other than segwit, another thing he has attacked repeatedly with demonstrably flawed math, and a source of many other lies). The problem is the cost factor. From our description of the Lightning Network we can see that the model of trust minimization, and the game theory of penalizing any old states broadcast, relies heavily on full nodes and watcher nodes growing with the lightning network in a decentralised manner. We've explained how low-cost, freely available hardware is made viable by the bitcoin network, and how larger blocks force centralisation onto specialised hardware. This is where the security model breaks down and becomes heavily trusted. Again, bitcoin's value proposition is damaged unless it remains self-sovereign, non-custodial, individually validated and trust-minimized on all layers. The chain is required to be monitored at all times.

Most of Rizun's recent criticisms are embarrassingly flawed and intellectually dishonest. He keeps repeating tired arguments around “loans”, a “banking layer”, and the claim that Lightning will centralise into a “hub and spoke” network topology. None of this makes any sense. The more connected nodes are to each other in the Lightning Network, the more efficient it becomes and the fewer hops routing requires. His recent use of an abacus-like structure is incredibly oversimplified and misleading. The more well-connected pathways there are, the less likely imbalances become. The emergence of centralised hubs would not damage the security model of Lightning, by design, and there is no incentive for them to exist anyway.

The Lightning Network is designed for micro-transactions, which can be broken down even into millisatoshis. There is actually a disincentive for centralised hubs: they would require a large amount of capital, their channels would eventually suffer imbalances, and they would be routed around for poor performance; as they became more of an “edge”, poorly connected node, they would be forced to close their channels. There are different trade-offs between using the Lightning Network and on-chain transactions: as described, Lightning is essentially a hot wallet meant for payments, and its scaling and network topology are confined only by the bitcoin blockchain on which it heavily relies. The two layers are meant to complement each other. At such an early stage there are still things we do not know, for instance how channel opening and closing will shape demand for on-chain block space. Bitcoin's limited block space is valuable, and the Lightning Network allows a much more efficient balance of its usage between micro-transactions off chain and settlements on chain. Unlike the emergent block size equilibrium criticised earlier, there will inevitably be a healthy, self-determined equilibrium of usage here. This is beneficial to the scalability of the bitcoin network, and the difference in incentive-based usage could provide price-inelastic demand for block space, though this is out of our control and an unknown variable. The Lightning Network is growing at such a pace that some of these effects may already be much better understood by the time you read this. Essentially, Lightning is simply bitcoin with optimised smart contract utility. The amount being transferred will determine the point at which one layer is more efficient than the other.

As of yet I have seen only one sound criticism of the Lightning Network, and it isn't really his own; he leveraged an old point from Thaddeus Dryja that he found recently. Although we described Lightning Network HTLCs as trust-minimized above, bitcoin on chain has what's known as a dust threshold. This is set at 546 sats (0.00000546 BTC) by default; anything below this limit is considered dust and will not be relayed or mined. The reason is that fees would be larger than the value transferred, and allowing such outputs would also make it easier to spam the network. You can begin to see how this affects our game theory of enforcing settlement below the threshold. In today's terms this is a minuscule figure, around 2 cents or under, but if we are talking about our world's monetary and fiscal doomsday scenario, with bitcoin succeeding, it could be of significantly higher value. Peter didn't actually understand this when it was first explained to him. He misinterpreted Tadge as referring to direct channel counterparties, which isn't really a problem and is perfectly enforceable provided neither party has a sub-dust-limit output. The problem comes if Alice decides to close her channel with Bob while Bob has, say, a 1-sat routed HTLC payment pending: Bob risks losing his 1 sat, as the transaction will be created without the HTLC output, and the amount of that output goes to fees if the transaction goes on chain. In my opinion this really isn't that big of a problem, and it is a fairly unlikely scenario that requires some level of trust in your direct channel counterparty.
But as we know, payments end to end are near instant (unless a routing node becomes unresponsive and the payment has to wait to become unstuck) and conditional, in that they must go through every hop or time out, so exploiting this would in most cases require very precise timing or a very high frequency of HTLCs. There are proposed solutions such as probabilistic payments, explained in further detail by David Harding and Greg Maxwell here. However, this is of very low priority at this early stage. It's worth noting that an on-chain output below the dust limit cannot economically be spent, and that the dust limit is not a consensus rule: it could be changed via local node configuration, or the default relaxed in node relay policy. Rizun uses an example of the following situation:

Let's explore the ridiculousness of this scenario, ignoring that he is conflating fees and dustRelayFee. For this scenario to become a trust issue on the Lightning Network, the amount has to be under the dust threshold. The dust limit, calculated from 3000 sats per kB (around 546 sats for a standard output), would have to equal $100; because this is a segwit transaction on Lightning, it's closer to 294 sats, which would have to equal $100. Just to reiterate: this is currently around 1 cent! And since we can remove the limit locally via node policy rules, or network-wide as a changed default if it ever becomes a problem, we are left with 1 sat having to equal $100. That is a limitation where we would have to increase the divisibility of bitcoin, as 1 sat is the lowest possible on-chain unit, even though much lower units are possible on the Lightning Network. This is something that would need to be addressed on chain, and we have seen above that probabilistic payments can be used in different ways. These issues affect any of Rizun's concepts of scaling on chain as well.
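The dust arithmetic above can be checked in a few lines. The output-plus-spend sizes below are the commonly cited approximations behind Bitcoin Core's default policy (an assumption here, not figures taken from Rizun):

```python
# Back-of-envelope dust-threshold arithmetic from the default dustRelayFee
# of 3000 sat/kvB (i.e. 3 sat/vbyte). An output is considered dust when the
# cost of creating and later spending it exceeds this rate times its size.

DUST_RELAY_FEE = 3  # sat per virtual byte (3000 sat/kvB default)

def dust_threshold(create_plus_spend_vbytes: int) -> int:
    """Smallest non-dust output value for a given create+spend size."""
    return DUST_RELAY_FEE * create_plus_spend_vbytes

print(dust_threshold(182))  # ~546 sats for a P2PKH output
print(dust_threshold(98))   # ~294 sats for a native segwit (P2WPKH) output
```

So the "$100 trusted payment" scenario requires 294 sats, roughly a cent today, to be worth $100, which is the absurdity the paragraph above walks through.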

Rizun's arguments that do not centre on channel balancing and fees are very tenuous. For example, one major benefit of the Lightning Network is a huge gain in user privacy, and many advancements can be made in increasing the transaction anonymity set and efficiency, both on chain when opening channels and off chain. Rizun's most recent arguments are that this will lead to an increase in AML/KYC and users being de-anonymised. I don't really believe it is worth exploring these paper-thin arguments around international money transmission regulations; as we have discussed, that really is not what bitcoin, the Lightning Network or any cypherpunk ideals are about, and we do not seek permission.

There are in fact many problems with the current Lightning Network as it exists today that developers have openly talked about, discussing possible solutions, workarounds and improvements. It would be wrong to claim the Lightning Network is some “enterprise ready” solution at this early stage, not that that is its end goal. Rizun and big blockers in general have simply been too preoccupied with conspiracy theories to provide any intellectual challenge in their arguments against the Lightning Network. Bitcoiners remain bitcoin's and Lightning's best critics. So let's explore some real criticisms:

Even though Lightning uses source-based onion routing similar to Tor and by design comes with huge privacy benefits, current poor usage practices mean that people are revealing their own IPs, especially when set up as routing nodes. Tor nodes are of great benefit here, both sending and receiving with a high degree of privacy using only a .onion address, but they currently suffer from a limited ability to balance liquidity, simply down to a lack of other Tor node peers. A solution could be to balance your own Tor node before connecting to any other peers. Opening private channels separate from the rest of the network has other benefits: being able to agree on your own rule sets, and increased privacy.

As discussed earlier, Neutrino is a great privacy improvement, but the promotion of non-self-validating clients, or even custodial models, is damaging to the network as a whole, and Lightning mobile wallets have revived many old incorrect narratives. This is a social attack vector; these solutions do not seek permission, and wallet developers are the problem. The only solution is education: the importance of running a full node is its own incentive. TariLabs have produced a much deeper dive into SPV and fraud proofs here.

On a similar note, permissioned, centralised, custodial wallets are in no way of benefit to the user. You are not using bitcoin or Lightning, and the service may or may not act on your behalf.

There are trade-offs in security between on-chain and off-chain transactions and Lightning Network hot keys. There isn't any solution to this other than best practice and managing funds; it is the whole purpose of layered protocols. The security of Lightning will continue to improve, but the trade-off against the very high security of on-chain transactions will remain.

To receive funds on LN you must be online (although it is possible to receive to your node remotely with a preimage, this will time out), and if your peer force-closes your channel, funds are not sent to your mnemonic seed. This will be solved by removing the random values in the keys that are not derived from the seed, but reliable hardware for routing nodes is important, as are static channel backups and other backup improvements to recover channels and restore previous state

In these early stages developers and bitcoiners have repeated that LN is still experimental and you can lose your funds. The term "reckless" gained traction and people took the risk, which supports its development by finding problems and stress testing the network. A soft limit was put in place, dubbed the "Wumbo" limit (0.1677 BTC per channel and 0.042 BTC per tx). There is good reason for this, but many have used it as a narrative against lightning network. There will be an opt-in higher limit, and critics selectively seem to forget that these values in bitcoin could be worth much more in the future

One issue with LN in its current state is channel balancing. One solution that has just been released is a feature called "Loop". It utilizes what are known as submarine swaps and allows users to move funds in a trust-minimized manner to create more inbound capacity and receive more to their channel in increasing increments without closing and reopening. Eventually it will be capable of the reverse, to refill channels
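The "trust minimized" part of a submarine swap comes from locking both legs of the trade to the same payment hash: the onchain HTLC and the lightning payment can each only be claimed with the same secret preimage, so claiming one reveals exactly what the counterparty needs to claim the other. A minimal sketch of that hash-lock in Python (illustrative only, not Loop's actual API):

```python
import hashlib
import secrets

def new_swap_secret():
    """The swap initiator picks a random preimage and shares only its hash;
    both the lightning invoice and the onchain HTLC are locked to this hash."""
    preimage = secrets.token_bytes(32)
    payment_hash = hashlib.sha256(preimage).digest()
    return preimage, payment_hash

def can_claim(payment_hash: bytes, candidate: bytes) -> bool:
    """Both legs enforce this same condition, so revealing the preimage to
    claim one leg simultaneously lets the counterparty claim the other.
    (Real HTLCs add a timeout branch so funds can be refunded on failure.)"""
    return hashlib.sha256(candidate).digest() == payment_hash

preimage, payment_hash = new_swap_secret()
print(can_claim(payment_hash, preimage))      # True
print(can_claim(payment_hash, b"\x00" * 32))  # False
```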

Onchain fees are a regular criticism that makes little sense. There must be a fee market in order to maintain bitcoin's value proposition. That LN essentially allows batching of many txs and settling them onchain, without any change to the base layer protocol, is the whole concept. Improvements in efficiency and privacy can be made through, for example, Schnorr signatures
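To put the batching claim in back-of-the-envelope numbers (every figure below is an assumed round number for illustration, not real fee data):

```python
# All figures are assumed round numbers for illustration, not measurements.
VBYTES_PER_TX = 250   # assumed size of a simple onchain payment, in vbytes
FEE_RATE = 20         # assumed fee rate, in sat/vbyte

def onchain_cost(n_payments: int) -> int:
    """Every payment is its own onchain transaction paying its own fee."""
    return n_payments * VBYTES_PER_TX * FEE_RATE

def lightning_cost(n_payments: int, routing_fee: int = 1) -> int:
    """One open and one close settle any number of offchain payments;
    each payment pays only a small liquidity-based routing fee."""
    onchain_settlement = 2 * VBYTES_PER_TX * FEE_RATE
    return onchain_settlement + n_payments * routing_fee

print(onchain_cost(1000))    # 5000000 sats: 1000 separate txs
print(lightning_cost(1000))  # 11000 sats: 2 txs plus routing fees
```

Under these assumptions a thousand payments settle for roughly 0.2% of the onchain fee cost; the point is the ratio, not the specific numbers.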

Fees will rise onchain, but the entire narrative around previous high fees and average fee estimates is completely blown out of proportion. Not only was there coordinated spam, there were also txs containing many inputs and outputs, and users requiring confirmation in the next block accepted a premium, causing major outliers at the time of average peak fees. With the current uptake of segwit and lightning network, and ongoing development of possible implementations such as Schnorr signature aggregation, this is unlikely any time in the near future. Schnorr together with Taproot also brings large privacy improvements: as well as coinjoins becoming more efficient and looking like regular txs, it can make our lightning 2-of-2 multisignature entry look like a regular entry

Another issue: what if someone doesn't accept lightning payments, or only accepts larger lightning payments? There are already solutions to these too. Splicing will allow you to "splice out" to make direct onchain payments, and there are multiple ways of refilling your balance without closing your channel. This can be combined with another development called part payment channels, or atomic multi-path payments, which allow us to make larger payments on lightning spread across multiple channels with little effect on liquidity balancing, and even improve privacy. More importantly, this allows wallets to be created with a much simpler UX for the average user, even in such a way that only a single balance would be visible and the user could choose to have the payment simply sent in the most efficient way possible, seamlessly, without even knowing, via autopilot features. Part payments are also relevant to our dust topic: you could use them to close a channel with a peer and exhaust remaining funds below the dust limit as part of a payment without losing anything
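The splitting side of multi-path payments can be sketched as a toy liquidity-aware splitter (hypothetical channel names and a greedy strategy of my choosing; real implementations also handle routing, fees and the atomicity of the shards):

```python
def split_payment(amount, channels):
    """Greedily split `amount` across channels by available outbound
    liquidity; returns a list of (channel_id, shard) pairs, or None if
    total outbound liquidity is insufficient."""
    shards = []
    remaining = amount
    # Drain the largest channels first to keep the shard count low.
    for cid, outbound in sorted(channels.items(), key=lambda kv: -kv[1]):
        if remaining == 0:
            break
        shard = min(outbound, remaining)
        shards.append((cid, shard))
        remaining -= shard
    return shards if remaining == 0 else None

channels = {"alice": 40_000, "bob": 25_000, "carol": 10_000}
print(split_payment(60_000, channels))   # [('alice', 40000), ('bob', 20000)]
print(split_payment(100_000, channels))  # None: not enough outbound liquidity
```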

We still haven't completely addressed how the funding of channels can be scaled further, but this is where a proposal known as generic multi-party channels, or a specific type called channel factories, can make vast improvements, both to splicing into existing channels and to creating new ones. It is essentially a layer between the bitcoin network and lightning network, or sub-channels, that allows a group of peers to reduce their onchain footprint with only 2 onchain txs. These were described in Christian Decker's original competing proposal to lightning mentioned earlier. They currently have other trade-offs, which can be solved, but to put into perspective the reduction they can have on blockspace usage combined with Schnorr:

“For a group of 20 users with 100 intra-group channels, the cost of the blockchain transactions is reduced by 90% compared to 100 regular micropayment channels opened on the blockchain. This can be increased further to 96% if Bitcoin introduces Schnorr signatures with signature aggregation.”
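A toy model shows where savings of that order come from; the vbyte sizes below are assumptions of mine, not figures from Decker's paper, so the exact percentage is illustrative only:

```python
# Assumed, illustrative sizes in vbytes - not figures from the paper.
FUNDING_VB = 200
CLOSING_VB = 200

def regular_channels_vbytes(n_channels: int) -> int:
    """Each channel pays for its own onchain open and close."""
    return n_channels * (FUNDING_VB + CLOSING_VB)

def factory_vbytes(n_channels: int, per_channel_vb: int = 20) -> int:
    """One shared open and close for the whole group; the intra-group
    channels live offchain, modeled as a small per-channel overhead in
    the (larger) settlement transaction."""
    return (FUNDING_VB + CLOSING_VB) + n_channels * per_channel_vb

saving = 1 - factory_vbytes(100) / regular_channels_vbytes(100)
print(f"onchain saving: {saving:.0%}")  # 94% under these assumed sizes
```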

With Schnorr, all peers sign the inputs, the signatures are then aggregated, and the joint signature proves the last peer has signed, allowing a lot of unnecessary and identifying data to be removed. A public key and signature onchain would then be indistinguishable whether it is actually a standard tx, coinjoin, coinswap, n-of-n multisig, the opening of lightning/multi-party channels or taproot/graftroot. More on Schnorr here
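The aggregation property can be seen in a toy Schnorr scheme over a tiny multiplicative group (deliberately insecure parameters and a naive key-sum; real proposals like MuSig weight each key to block rogue-key attacks):

```python
import hashlib

# Deliberately tiny, INSECURE toy parameters: p = 2q + 1, and g generates
# the order-q subgroup of Z_p*. Real Schnorr uses secp256k1.
p, q, g = 23, 11, 4

def challenge(R, X, msg):
    """c = H(R || X || m), reduced into the group order."""
    data = f"{R}|{X}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, r, msg, X=None, R=None):
    """Schnorr sign with key x and nonce r; X/R can be overridden with the
    aggregate key/nonce so that several signers share one challenge."""
    X = pow(g, x, p) if X is None else X
    R_own = pow(g, r, p) if R is None else R
    c = challenge(R_own, X, msg)
    return R_own, (r + c * x) % q

def verify(X, R, s, msg):
    c = challenge(R, X, msg)
    return pow(g, s, p) == (R * pow(X, c, p)) % p

# Two channel peers aggregate: keys and nonces multiply, s-values add,
# and a single (X, R, s) triple verifies for both - one signature onchain.
x1, x2, r1, r2 = 3, 7, 5, 2  # fixed values for reproducibility (insecure)
X_agg = (pow(g, x1, p) * pow(g, x2, p)) % p
R_agg = (pow(g, r1, p) * pow(g, r2, p)) % p
_, s1 = sign(x1, r1, "channel open", X=X_agg, R=R_agg)
_, s2 = sign(x2, r2, "channel open", X=X_agg, R=R_agg)
s = (s1 + s2) % q
print(verify(X_agg, R_agg, s, "channel open"))  # True
```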

To summarise, the idea that bitcoin needs lightning network is a complete misconception. Bitcoin is a censorship-resistant, globally transportable store of value. It would take an obscene number of confirmations on any alternative chain to equal the security of just a single bitcoin confirmation, and the gap grows constantly wider. "The blockchain" is a very inefficient way of solving the Byzantine generals problem, but proof of work is the only mechanism that cannot simply be replicated, and no other proof of work can provide the level of security of bitcoin's network. In order for bitcoin's compounding security to remain self-sustaining and its txs censorship resistant, there must be ongoing demand for blockspace, and a tx fee market must develop to incentivise miners. Lightning improves this, as mentioned earlier, by providing price-inelastic demand for blockspace, while onchain fee premiums pay for accelerated confirmations. Inflation will never be an option. This is completely disconnected from lightning's own fee market, which is liquidity based and priced per sat routed, as opposed to blockspace onchain. The demand lightning provides to efficiently settle onchain and fill blockspace also disincentivizes spam attacks. These can come from multiple bad actors: dust attacks, miner payouts, network freeriders such as those trying to leverage bitcoin's security for their own application, or simply poor user practices.

Recent comments from people like Peter Rizun and Emin Gun Sirer equating lightning to "semi-custodial/full-custodial banking", or Craig Wright's "IOUs", are not grounded in any reality and probably stem all the way back to Mike Hearn's and Gavin Andresen's damaging ideas. They seem to believe that the bitcoin network can remain "cheap" forever, even with an ever-rising price, tightening monetary policy and fixed supply, completely ignoring that a supermajority of miners has the capability to orphan large blocks or leave out txs by calculating profitability, which would lead to complete centralisation of mining and servers as seen on alt chains. This is not what lightning network solves but what it avoids; it works in balance with the developing fee market without compromising any security of the base layer, and settlement has greater value to the user through the greater transactional utility of blockspace. Miners are only one half of bitcoin's consensus mechanism. We do not trust miners, we verify. Users can reject any changes to the ruleset; this must be protected at all costs, and any change must be non-contentious. As the bitcoin network grows, the likelihood of reaching consensus for changes diminishes and the protocol becomes more likely to largely ossify. This, coupled with its ever-compounding security, is what makes bitcoin unreplicable, true digital scarcity. Lightning network, even though possible to deploy on alternative chains, is also unlikely to be replicated, mostly due to breaking its game theory and liquidity economy, and the possibility of cross-chain atomic swaps being gamed via time delay to profit or reject the payment. The base layer of the bitcoin protocol and its network effect are what lightning network needs. Lightning could also act as a bridge to sidechains, like the Elements-based Liquid network, with additional features via atomic swaps.
But one thing we know for a fact is that raising the blocksize will inevitably have a severe adverse effect on network topology and decentralisation. Those happy to take on technical debt today will cause bitcoin to become a failed project tomorrow. Run a full node, validate your own transactions and increase your privacy, while contributing to your own and the network's security and immutability. And anyone who tells you full nodes do nothing or don't exist is a fraud… Craig.

The real Satoshi's views changed in the short time frame he was online. Here's one of his last public posts:

“Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it’s easy for lots of users and small devices” — Satoshi Nakamoto

Bitcoin constantly onboards more users than all other alternatives combined. Its network effect is unrivalled, and users are bitcoin's global immune system against political attack vectors!

The future of big blocker initial node syncing: the 5 petabytes shipped around the world on 747 hard drives for the first image of a black hole would have taken years to transfer over current internet infrastructure

Check out these guides (which will likely be updated) on how to set up your own nodes from:

StopAndDecrypt (Linux based full node) & (more on how full nodes and mining work)

grubles (c-lightning over Tor)

Stadicus (SBC fullnode+LN Nodes and tor guide)

Pierre Rochard (node launcher+zap)

BTCPay Server (full node+LN for merchant processing)

Jameson Lopp (endless rabbit hole resource archive)

If you enjoyed part 1 of my toxic bitcoin series feel free to tip a pirate a beer via paynym +noisyfog046 (part 2 will explain all)

Update: Stephan Livera did an interview on the subject with Joost Jager