My general opinion on the matter is “data or STFU.” The motivation for this debate is to help Bitcoin scale up in both transaction rate and transaction volume. Will making the blocks bigger actually address this in a consequence-free manner, or will it just push the problem down the road, resulting in more blockchain bloat and fewer full nodes along the way? I don’t know, and as far as I know there’s no empirical data on how well an XT network performs compared to a Bitcoin network under similar conditions. Until I see data, I’m calling BS.

Part of the reason I’m skeptical is that distributed systems typically scale by keeping each node’s state sub-linear in the size of the system. For example, DHT nodes do not store every single piece of data, or even routes to every single node; each node stores O(k/n) data and O(log n) routes for n nodes and k records. As another example, Internet routers do not store routes to every single publicly-routable host; they associate route prefixes with their neighboring routers, and send packets along the interface with the longest IP prefix match (leading to O(log n) expected routing hops for n hosts, and O(1) memory per router).
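To make the DHT point concrete, here is a minimal consistent-hashing sketch (the node/record names and parameters are made up for illustration) showing that each node ends up responsible for roughly k/n of the records, rather than all of them:

```python
# Minimal consistent-hashing sketch: hash nodes and records onto a ring,
# assign each record to the first node clockwise from its hash.
# Illustrative only -- real DHTs add virtual nodes, replication, etc.
import hashlib
from bisect import bisect

def ring_hash(s):
    # Map a string to a point on the identifier ring.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

n_nodes, k_records = 50, 10_000
ring = sorted(ring_hash(f"node-{i}") for i in range(n_nodes))

load = {node: 0 for node in ring}
for r in range(k_records):
    idx = bisect(ring, ring_hash(f"record-{r}")) % n_nodes
    load[ring[idx]] += 1

avg = k_records / n_nodes
print(f"average load per node: {avg:.0f}")   # k/n = 200
print(f"max observed load: {max(load.values())}")
```

The load is uneven without virtual nodes, but the key property holds: no node stores anywhere near all k records, and adding nodes shrinks each node’s share. That is the kind of sub-linear per-node growth Bitcoin lacks, since every full node validates every block.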

This does not appear to be the case with Bitcoin XT. Bitcoin XT might be able to do better than Bitcoin by a constant factor. But the XT approach is like trying to scale a small computer network by buying a bigger switch and making everyone’s ARP table bigger. The system might bear a little bit more load for a proportional resource investment (and probably with unintended consequences), but it won’t solve the underlying scalability limitation (feature?) that everyone still needs to mine the same blocks, no matter how big they get or how often they are added.

So, I need to see some extraordinary results from the XT proposal before I believe that it solves the scalability problem, especially if the proposed solution is to do better by a constant factor.

EDIT: clarity