Note: I forgot to include in the proposal the reduction of the MAX_BLOCK_SIGOPS_COST network rule to 80000.
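
For concreteness, a minimal sketch of what that change could look like, assuming CPUchain keeps Bitcoin Core's source layout (the file path and the current CPUchain value are assumptions on my part):

    // src/consensus/consensus.h (location as in Bitcoin Core; may differ in CPUchain)
    /** The maximum allowed number of signature check operations in a block (network rule) */
    static const int64_t MAX_BLOCK_SIGOPS_COST = 80000; // reduced per this proposal

Since lowering the limit tightens a consensus rule, it would deploy as a soft fork.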


Sure, the worst cases are rather theoretical. Given current technology, the real problem isn't 1 GB blocks; it is the large block sizes that could plausibly end up on the blockchain through fair use, not necessarily from attackers. The thread was oriented more toward decentralization and sustainability than toward network attacks. In my opinion, your arguments about the inviability of a hypothetical attack involving huge blocks are reasonable. I share your view that the block time acts as a regulator of viable block sizes.

About validation time, I think we should explore ways to do batch verification. The Schnorr signature variant that publishes the nonce point R (signatures of the form (R, s)) supports that type of validation. We should start a dedicated thread, because Schnorr signatures have multiple interesting applications.
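
For reference, here is the standard batch trick, written schematically (the exact challenge hash depends on the variant; BIP340 describes a concrete instantiation). A single signature (R_i, s_i) on message m_i under public key P_i, with challenge e_i = H(R_i || P_i || m_i), verifies if

\[ s_i G = R_i + e_i P_i . \]

To verify a batch of n signatures, pick independent random scalars a_i (with a_1 = 1) and check the single combined equation

\[ \left( \sum_{i=1}^{n} a_i s_i \right) G \;=\; \sum_{i=1}^{n} a_i R_i \;+\; \sum_{i=1}^{n} (a_i e_i) P_i , \]

which one multi-scalar multiplication evaluates much faster than n separate checks. The random a_i ensure that invalid signatures cannot cancel each other out.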

I agree that we should launch and maintain a testnet; that deserves another thread to coordinate it.




Best regards, 

--
Ricard Civil




4 Apr 2021, 09:33 by garlonicon@onet.pl:
As I said before on Discord, I think the limits should be kept as they are now. Why? Mainly because increasing the block size is harder than decreasing it. Miners can independently apply a 1 MB non-SegWit and a 4 MB SegWit limit in their own blocks. As long as traffic is small, nobody will see any difference. Another factor is block propagation time, which is affected by block upload and download time. Even if an attacker has a very fast connection, other nodes may have slower setups. So even if someone tries to create and propagate a 1 GB block, there is a fairly high chance that the block will quickly become stale, because other nodes will process some smaller block faster and start building on top of it sooner. And because every node that relays such a huge block has to repeat the transfer to its peers, propagation costs 1 GB for each node that does not yet have it.
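
To put a number on that (10 MB/s is only an illustrative upload rate, not a measurement):

\[ t_{\text{upload}} = \frac{1\,\text{GB}}{10\,\text{MB/s}} = 100\,\text{s} > 60\,\text{s (block interval)}, \]

so a single hop of such a block already takes longer than the block time.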

So, if blocks become too big and someone starts attacking the network by producing 1 GB blocks, all mining pools and solo miners can simply limit their own blocks to 1 MB. Then, on average, 1 GB blocks will be created only by that attacker (or group of attackers), and as long as the group has less than 50% of the mining power, the average block size will be at most about 0.5 GB (or 2 GB with SegWit, though the 4 GB limit is more theoretical than practical, because reaching it requires producing a very large number of signatures).
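
As a back-of-the-envelope check (assuming the attacker mines a fraction p of the blocks at the 1 GB limit while honest miners stay at 1 MB):

\[ \mathbb{E}[\text{size}] = p \cdot 1\,\text{GB} + (1-p) \cdot 1\,\text{MB} \approx 0.5\,\text{GB} \quad \text{for } p = 0.5 . \]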

Also, even if the attacker makes a 4 GB block, it will take a lot of time not only to download but also to validate it. Validation can be a huge bottleneck. I have heard that validating the biggest BTC block takes around 10 seconds on an average PC, and we have a 60-second block time. Validating a 4 GB CPUchain block would be roughly 1000 times slower, and spending about 10,000 seconds per block is long enough to discourage nodes from building on top of such big blocks.
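
Spelled out, assuming validation time scales roughly linearly with block size and taking ~4 MB for the biggest BTC block:

\[ t_{\text{validate}} \approx 10\,\text{s} \times \frac{4\,\text{GB}}{4\,\text{MB}} = 10{,}000\,\text{s} \gg 60\,\text{s (block interval)}. \]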

In practice, to really benefit from such big blocks, validation would have to become faster than it is today. And as long as most of the network consists of non-upgraded nodes, even if the attacker can create huge blocks and validate them faster than the rest of the network, other nodes will still spend a lot of time downloading, and even more time validating, each such block. Probably no serious solo miner or mining pool will build on top of a not-yet-validated block, so there is a huge risk that big blocks will become stale.

So, in my opinion, the network is well protected from that kind of attack today, and keeping such a huge limit will simply pay off in the future, once validation becomes faster. But to really prove or disprove that, it should be tested in practice. Maybe not on mainnet, but we have a testnet (currently inactive, though we could activate it) and signet (even better than testnet; it could be activated and used instead). For unknown reasons, the regtest difficulty is not minimal, as it is in Bitcoin; I think that should be fixed in a future release. By the way, as far as I know, none of the CPUchain test networks are in use, so we could fix that quite easily.
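
A minimal sketch of the regtest fix, assuming CPUchain's chainparams mirror Bitcoin Core's (the class name, field names, and the right powLimit value for CPUchain's proof of work are assumptions to be checked against the actual tree):

    // src/chainparams.cpp, in CRegTestParams (as laid out in Bitcoin Core)
    // Let regtest blocks be mined at (near-)minimal difficulty:
    consensus.powLimit = uint256S("7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
    consensus.fPowAllowMinDifficultyBlocks = true;
    consensus.fPowNoRetargeting = true;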