Block size, sustainability and centralization implications
This post aims to describe the current situation, share my opinion, and start a discussion about the block size. CPUchain's current block size limit is 4000000000 Bytes (4 GB) with segwit and 1000000000 Bytes (1 GB) for nodes that don't support segwit. This size affects multiple aspects of the protocol, such as the maximum amount of data to be stored and transmitted. Data storage and network bandwidth are two aspects that could threaten the decentralization of the network, because they increase the cost of running a full node. This block size is part of CPUchain's roadmap to achieve 80K transactions per second. From the original whitepaper: "As we believe in self-sustainable cryptocurrency that can last for more than hundred years, we take the scaling issue seriously and thus we’ve implemented a scaling solution that can cover up to 80K transactions per second, which exceeds current visa’s 50k tps capacity." As far as I have seen in the repository, commit c82cee8 <https://github.com/cpuchain-core/cpuchain/commit/c82cee8f7d9dc44e75bf95470e9...> only changes the constant limits for block size, weight and sigops in consensus.h <https://github.com/cpuchain-core/cpuchain/commit/c82cee8f7d9dc44e75bf95470e9...> and consensus.nPowTargetSpacing in chainparams.cpp <https://github.com/cpuchain-core/cpuchain/commit/c82cee8f7d9dc44e75bf95470e9...>. While these limits allow more transactions, among other benefits, they also imply a huge blockchain size and very high bandwidth usage.
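As a rough sanity check on that 80K tps target, here is a small Python sketch. The ~250-byte average transaction size is my assumption for illustration, not a figure from the whitepaper; real transaction sizes vary.

```python
# Rough throughput check for the 80K tps claim, assuming an average
# transaction size of ~250 bytes (my assumption; real sizes vary).

BLOCK_TIME_S = 60     # CPUchain block time
AVG_TX_BYTES = 250    # assumed average transaction size

def tps(block_size_bytes: int) -> float:
    """Transactions per second if every block is full of average-size txs."""
    return block_size_bytes / AVG_TX_BYTES / BLOCK_TIME_S

print(f"{tps(1_000_000_000):,.0f} tps at the non-segwit limit")   # ~66,667
print(f"{tps(4_000_000_000):,.0f} tps at the segwit limit")       # ~266,667
```

So even with small transactions, the non-segwit limit alone is in the right order of magnitude for the whitepaper's 80K tps figure.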
CPUchain block size (disk and wire):

  nonSegwitBS = 1000000000 Bytes (1 GB)
  segwitBS    = 4000000000 Bytes (4 GB)

Worst-case block storage/bandwidth, with minsInDay = 1440 blocks per day at a 60-second block time:

  worstInDayNoSeg   = nonSegwitBS * minsInDay       = 1.44 * 10^12 Bytes  (1.44 TB)
  worstInDaySeg     = segwitBS * minsInDay          = 5.76 * 10^12 Bytes  (5.76 TB)
  worstInMonthNoSeg = nonSegwitBS * minsInDay * 30  = 4.32 * 10^13 Bytes  (43.2 TB)
  worstInMonthSeg   = segwitBS * minsInDay * 30     = 1.728 * 10^14 Bytes (173 TB)
  worstInYearNoSeg  = nonSegwitBS * minsInDay * 365 = 5.256 * 10^14 Bytes (526 TB)
  worstInYearSeg    = segwitBS * minsInDay * 365    = 2.102 * 10^15 Bytes (2.1 PB)

As these hypothetical worst cases show, in one day a non-segwit node would require 1.44 TB of both storage and bandwidth; in a month disk usage would grow by 43.2 TB, and in a year by 526 TB. Bandwidth and storage are only two of the many aspects directly or indirectly affected by the block size; such blocks would also require more compute power to handle and process. These facts increase the cost of maintaining a full node and reduce the number of users who can run one, which, at current storage prices and with typical network limits, translates into a more centralized network. In the further reading below I've linked some posts where the block size limit, with its pros and cons, is discussed.

On the other side, this block size has advantages: a larger total amount of transaction fees, making mining more profitable (if the cost of running a full node with these specs is ignored), and more transactions per second, with a theoretical capacity 10000 times that of Bitcoin (CPUchain's block size is 1000 times bigger and its block time is 10 times smaller).

Conclusion: in my opinion, this block size is neither sustainable nor efficient with the current implementation, and it potentially makes CPUchain more vulnerable to centralization.
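The worst-case figures above can be reproduced with a short Python sketch; the constants are the consensus limits quoted in this post.

```python
# Worst-case storage/bandwidth growth for a CPUchain full node,
# assuming every block is completely full.

NON_SEGWIT_BLOCK = 1_000_000_000   # 1 GB base block size limit
SEGWIT_BLOCK = 4_000_000_000       # 4 GB segwit limit
BLOCKS_PER_DAY = 24 * 60           # one block per minute (60 s block time)

def worst_case_bytes(block_size: int, days: int) -> int:
    """Bytes a full node must download and store over the given period."""
    return block_size * BLOCKS_PER_DAY * days

for label, days in [("day", 1), ("month", 30), ("year", 365)]:
    no_seg = worst_case_bytes(NON_SEGWIT_BLOCK, days)
    seg = worst_case_bytes(SEGWIT_BLOCK, days)
    print(f"per {label}: non-segwit {no_seg / 1e12:.3f} TB, "
          f"segwit {seg / 1e12:.3f} TB")
```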
Until there is a solution for these storage and bandwidth requirements, I propose reducing the limits to 1000000 Bytes for non-segwit implementations and 4000000 Bytes for segwit implementations, which would still give a payment network 10 times faster than Bitcoin. Recall that off-chain solutions are a workaround to allow more tps.

What is your opinion on this topic?

Further reading (not in any specific order):

https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki
https://github.com/bitcoin/bips/blob/master/bip-0144.mediawiki
https://medium.com/segwit-co/why-a-discount-factor-of-4-why-not-2-or-8-bbceb...
https://en.bitcoinwiki.org/wiki/Block_size_limit_controversy
https://bitcoin.stackexchange.com/questions/36085/what-are-the-arguments-for...
https://coinzodiac.com/bitcoin-block-size-argument/
https://en.bitcoin.it/wiki/Off-chain_transactions
https://cpuchain.org/assets/v1.pdf
https://bitcoin.stackexchange.com/questions/67760/how-are-sigops-calculated
https://github.com/cpuchain-core/cpuchain/blob/master/src/consensus/consensu...
https://github.com/cpuchain-core/cpuchain/commit/c82cee8f7d9dc44e75bf95470e9...

--
Ricard Civil

Sent with Tutanota, the secure & ad-free mailbox: https://tutanota.com
As I said before on Discord, I think the limits should be kept as they are now. Why? Mainly because increasing the block size is harder than decreasing it. Miners can independently apply a 1 MB non-segwit and 4 MB segwit limit to their own blocks, and as long as traffic is small, nobody will see any difference.

Another factor is block propagation time, which is affected by block upload and download time. Even if an attacker has a very fast connection, other nodes may have slower setups. So even if someone tries to make a 1 GB block and propagate it, there is quite a high chance that such a block will quickly become stale, because other nodes will process some smaller block faster and then start building on top of that smaller block. And because all nodes relaying such a huge block have to repeat it over and over to all other nodes, it will cost them 1 GB per node that does not yet have it.

So if blocks get too big and someone starts attacking the network by producing 1 GB blocks now, all mining pools and solo miners can simply limit their own blocks to 1 MB. Then, on average, 1 GB blocks will be created only by that attacker (or group of attackers), and as long as this group has less than 50% of the mining power, the average block size will be around 0.5 GB in the worst case (or 2 GB with segwit, though the 4 GB limit is more theoretical than practical, because reaching it requires producing a very large number of signatures).

Also, even if the attacker makes a 4 GB block, it will take a lot of time not only to download that block but also to validate it. Validation can be a huge bottleneck. I heard that validating the biggest BTC block takes around 10 seconds on an average PC, and we have a 60-second block time. Validating a 4 GB CPUchain block would be about 1000 times slower, so spending around 10000 seconds per block on average will be long enough to discourage nodes from building on top of such big blocks.
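A back-of-the-envelope sketch of those two arguments. The honest 1 MB self-limit and the linear scaling of validation time with block size are the assumptions stated above; the 10-second figure for the biggest BTC block is hearsay, as noted.

```python
# Attack scenario: honest miners cap their blocks at 1 MB while an
# attacker with a given hashrate share mines 1 GB blocks.

HONEST_BLOCK = 1_000_000        # 1 MB self-imposed honest limit
ATTACKER_BLOCK = 1_000_000_000  # 1 GB attacker block

def average_block_size(attacker_share: float) -> float:
    """Long-run average block size given the attacker's hashrate share."""
    return attacker_share * ATTACKER_BLOCK + (1 - attacker_share) * HONEST_BLOCK

print(average_block_size(0.5) / 1e9)  # ~0.5 GB at just under 50% hashrate

# Validation-time estimate: if a ~4 MB BTC block takes ~10 s to
# validate on an average PC, a 4 GB block (1000x larger) scales
# linearly to ~10000 s, versus a 60 s block interval.
BTC_VALIDATION_S = 10
SIZE_RATIO = 4_000_000_000 // 4_000_000   # = 1000
print(BTC_VALIDATION_S * SIZE_RATIO)      # 10000 seconds
```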
In practice, to really benefit from such big blocks, improvements should be made so blocks can be validated faster than today. And as long as most of the network consists of non-upgraded nodes, even if the attacker can create huge blocks and validate them faster than the rest of the network, other nodes will still spend a lot of time downloading, and even more time validating, that block. Probably no serious solo miner or mining pool will start building on top of a not-yet-validated block, so there is a high risk that big blocks will become stale.

So, in my opinion, the network is well protected from that kind of attack now, and keeping such a huge limit will just make things easier in the future, once we make validation faster. But to really prove or disprove that, it should probably be tested in practice. Maybe not on mainnet, but we have a testnet (which is inactive, but we could activate it) or signet (even better than testnet, and it could also be activated and used instead). For unknown reasons, regtest difficulty is not minimal like in Bitcoin; I think that should also be fixed in future releases. By the way, as far as I know, all CPUchain test networks are unused, so we can fix that quite easily.
Note: I forgot to include in the proposal the reduction of the MAX_BLOCK_SIGOPS_COST network rule to 80000.

Sure, the worst cases are rather theoretical. Given current technology, the real problem isn't 1 GB blocks, but those big block sizes that could plausibly end up on the blockchain legitimately, and not necessarily from attackers. The thread was oriented more towards decentralization and sustainability than towards network attacks. That said, your arguments about the inviability of a hypothetical attack involving huge blocks are reasonable, and I share your view that block time acts as a regulator of viable block sizes.

About validation time, I think we should explore ways to do batch verification; Schnorr signatures support that type of validation. We should start a separate thread, because Schnorr signatures have multiple interesting applications. I also agree that we should launch and maintain a testnet; that deserves another thread to coordinate it.

Best regards,

--
Ricard Civil

Sent with Tutanota, the secure & ad-free mailbox: https://tutanota.com

4 Apr 2021, 09:33 by garlonicon@onet.pl: