Doing block header checking in parallel
For now, all hashes are downloaded and validated one by one, linearly. That is fast enough for coins like Bitcoin, where sha256d is cheap to compute, but CPUchain uses cpupower, which is much more expensive. So my proposal is to implement it differently: for example, we could validate many hashes in many different threads, in parallel, as is currently done for transactions.
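The parallel idea above can be sketched as follows. This is not CPUchain's actual code; the headers, target, and function names are invented for illustration, and sha256d stands in for cpupower. Note that in CPython, threads only give a real speedup if the hash function releases the GIL (hashlib does so for large buffers; a cpupower binding implemented as a C extension could do the same).

```python
# Sketch only: check proof-of-work on many block headers in parallel
# threads. Headers and target below are made-up placeholders.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256 (stand-in for cpupower here)."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def header_meets_target(header: bytes, target: int) -> bool:
    # The hash is interpreted as a little-endian integer, as in Bitcoin.
    return int.from_bytes(sha256d(header), "little") <= target

def check_headers_parallel(headers, target, workers=4):
    # Each header's PoW check is independent of the others,
    # so the checks can run in any order and in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda h: header_meets_target(h, target), headers))

# Example: 80-byte dummy headers; a maximal target accepts every hash.
headers = [bytes([i]) * 80 for i in range(8)]
results = check_headers_parallel(headers, target=2**256 - 1)
print(all(results))  # True
```

The point of the sketch is that the PoW check itself needs no chain context, so it parallelizes trivially; only deciding whether the *target* was the right one for that height requires context.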
Validating block headers in parallel will be faster, but it's more complex. There are two kinds of checks: non-contextual and contextual. While the first can easily be done in parallel, the second needs a more contextualized solution. Could you elaborate on the procedure?

Best regards,
-- Ricard Civil

12 May 2021, 19:07 by vjudeu@gazeta.pl:
While the first can be easily done in parallel, the second type needs a more contextualized solution. Could you elaborate on the procedure?
I think we should handle it in a similar way as in transaction verification. Given some chain of blocks A->B->C->D, we can verify block D in one thread and block C in another. We can verify multiple blocks in parallel, and as long as everything is OK, we get a speedup. But it is possible that block A is invalid, in which case we should reject blocks B, C and D later. Assuming that bandwidth is not a bottleneck, we could, for example, allow downloading some data without checking everything, and reject it later if needed. For example, with 4 threads we could verify four blocks separately and later join the results.

Full context is not needed. For example, if we want to verify block D without checking the previous blocks, we can assume that its transaction inputs are correct. We store those assumptions somewhere. Later, when the verification of block C ends, we collect all UTXOs created in block C and match them against the assumptions made for block D. Of course it is possible that not all assumptions will be cleared, but once all assumptions for some block are cleared, we can safely say that this block is valid. And if finally all assumptions for all blocks are cleared, then everything is validated.

More than that: if there were some long but invalid chain, we could create SPV proofs for other nodes and share them, speeding up the rejection of stale chains even further. But to start with, block verification should be optimized.
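The assumption-clearing scheme described above can be modeled with a small sketch. The class name, the outpoint strings, and the whole API are invented for illustration; this is a toy model of the idea, not code from any client. Each block validated out of order records which outpoints it merely assumed to exist; when an earlier block finishes, its created UTXOs clear the matching assumptions.

```python
# Toy model of out-of-order block validation with assumption tracking.
# All names and data here are hypothetical.

class AssumptionTracker:
    def __init__(self):
        self.pending = {}     # block -> set of outpoints it assumed exist
        self.created = set()  # outpoints confirmed to exist so far

    def validate_out_of_order(self, block, spends):
        # Record every input we could not yet check as an open assumption.
        self.pending[block] = {o for o in spends if o not in self.created}

    def predecessor_done(self, new_utxos):
        # An earlier block finished: its outputs clear matching assumptions.
        self.created |= set(new_utxos)
        for block in self.pending:
            self.pending[block] -= self.created

    def fully_valid(self, block):
        # A block is only safe once every one of its assumptions is cleared.
        return not self.pending.get(block)

t = AssumptionTracker()
t.validate_out_of_order("D", spends={"C:0", "C:1"})  # D verified before C
t.predecessor_done(new_utxos={"C:0", "C:1"})         # C finishes later
print(t.fully_valid("D"))  # True
```

A real implementation would also need to propagate *rejection*: if block C turns out invalid, the assumptions recorded by D can never be cleared and D must be discarded as well, which matches the "reject B, C and D later" case above.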
I am assuming that we are talking about validation of the block header, not the whole block. Block headers need more contextual checks, for example (but not limited to) timestamp correctness and nBits. Checks that could be done in parallel, without context, are the hash computation of the header and the validation that the hash is equal to or lower than the target encoded in nBits; this check must assume that the nBits value in the header in question is correct. Other non-contextual checks can also be done in parallel.

Best regards,
-- Ricard Civil

13 May 2021, 06:02 by vjudeu@gazeta.pl:
_______________________________________________
Cpuchain-dev mailing list -- cpuchain-dev@mailman3.com
To unsubscribe send an email to cpuchain-dev-leave@mailman3.com
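The non-contextual part of header validation that Ricard describes, decoding the compact nBits field into a full target and comparing the header hash against it, can be sketched like this. The helper names are my own; 0x1D00FFFF is Bitcoin's familiar minimum-difficulty nBits value, used only as an example (CPUchain's parameters may differ, and cpupower would replace sha256d).

```python
# Sketch of the non-contextual proof-of-work check on a header.
import hashlib

def bits_to_target(bits: int) -> int:
    """Expand the compact 'nBits' encoding into a 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0x007FFFFF  # mask off the sign bit
    return mantissa << (8 * (exponent - 3))

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def pow_check(header: bytes, bits: int) -> bool:
    # Non-contextual: needs only the header itself. Whether this nBits
    # value is the *right* one for this height is the contextual part.
    return int.from_bytes(sha256d(header), "little") <= bits_to_target(bits)

# 0x1D00FFFF is the minimum-difficulty nBits of Bitcoin's genesis block.
target = bits_to_target(0x1D00FFFF)
print(hex(target))  # '0xffff' followed by 52 zeros
```

Because `pow_check` depends only on the 80 bytes of the header, a batch of downloaded headers can be fanned out across threads; the contextual checks (is this nBits correct given the retarget rule, is the timestamp acceptable) then run afterwards against the already-hashed results.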
participants (2)
- ricardcivil@tuta.io
- vjudeu@gazeta.pl