Comments (2)

giaki3003 commented on August 19, 2024

Nice!

zawy12 commented on August 19, 2024

This is a copy-paste of my reply to Kaspa issue 1691. It discusses the issues surrounding the difficulty algorithm that's needed.

The current Kaspa DAA is

T = targetTimePerBlock
N = windowSize
target = avgTarget * ((MaxTimestamp - MinTimestamp) / (T * N))
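
For concreteness, here's a minimal Python sketch of that calculation. It's only illustrative, not Kaspa's actual code; window_targets and window_timestamps are placeholder names for the targets and timestamps of the blocks in the difficulty window.

# Minimal sketch of the SMA-style DAA above (illustrative, not Kaspa's code).
T = 1          # targetTimePerBlock in seconds (assumed value)
N = 2640       # windowSize

def sma_target(window_targets, window_timestamps):
    avg_target = sum(window_targets) // len(window_targets)
    timespan = max(window_timestamps) - min(window_timestamps)   # Max - Min
    return avg_target * timespan // (T * N)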

The options for making it computationally cheaper are to use WTEMA or a variation of Digishield.

WTEMA is a very close cousin of "relative" ASERT and requires only the parents and grandparents. It approximates "relative" ASERT by using e^x ≈ 1 + x for small x. "Relative" means using the grandparent instead of the reference block used in the "absolute" ASERT at the bottom of this comment. Here's WTEMA for a normal POW chain.

ST = solvetime between parent and grandparent
next_target = prior_target * (1 + ST/T/N - 1/N)

To see how it works: if ST = T there is no adjustment; then notice the outcomes when ST > T or ST < T. ETH uses a variation of this. You can expand the above equation out to use integer math. A high N = 1320 will accumulate error if nBits is not higher precision than in BTC. In its lowest-precision range, BTC's nBits has a relative precision of only 2^(-15), and the error is then N/2 * 2^(-15) ≈ 2% in the solvetime for N = 1320.
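
As a sketch of that integer-math expansion (my own rearrangement of the formula above, not any chain's production code): next = prior * (1 + ST/(T*N) - 1/N) becomes next = prior * (T*N + ST - T) / (T*N), with the multiplication done before the division to keep precision.

# WTEMA expanded to integer math (sketch only).
T = 1          # target time per block in seconds (assumed value)
N = 1320       # smoothing constant in blocks

def wtema_next_target(prior_target, solvetime):
    # solvetime = timestamp(parent) - timestamp(grandparent)
    return prior_target * (T * N + solvetime - T) // (T * N)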

N = 1320 for ASERT and WTEMA has the same stability as N = 2640 in Kaspa's current SMA, and they are faster to respond to hashrate changes.

If the ST in WTEMA is too small to have good accuracy, then WTEMA can be modified in a way that makes it like Digishield. This also happens to be a way to modify the current algo that requires a lot less computation and greatly reduces the possibility of oscillations. If you have to go N blocks into the past for WTEMA to get a large enough ST, or want to make the current SMA 10x more efficient by going from N = 2640 to N = 2640/10 = 264, then the equation would be (leaving out the PPB and PD adjustments discussed below):

N = 264
timespan = Max - Min timestamps which are N+1 blocks apart.
M = 10 = "filter", "buffer", "tempering", or "dilution" factor.  Notice M=1 makes it an SMA.
target = avgTarget * (1 + timespan/(N*T*M) - 1/M)

Again, multiply it out to use integer math. This has the same stability as Kaspa's current SMA.
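
A sketch of the tempered version in the same integer-math style (again only illustrative): avgTarget * (1 + timespan/(N*T*M) - 1/M) expands to avgTarget * (N*T*M + timespan - N*T) / (N*T*M).

# Tempered-SMA (Digishield-like) sketch in integer math.
T = 1          # target time per block in seconds (assumed value)
N = 264        # reduced window size
M = 10         # tempering / dilution factor; M = 1 reduces this to a plain SMA

def tempered_target(avg_target, max_ts, min_ts):
    timespan = max_ts - min_ts      # timestamps N+1 blocks apart
    return avg_target * (N * T * M + timespan - N * T) // (N * T * M)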

WTEMA and the Kaspa algorithm above are not targeting the solvetime after a miner first sees a block, but the propagation delay plus the solvetime. That's not correct either, because propagation delay has a non-linear effect on DAG width that depends on how close it is to the miners' solvetime, and DAG width has a linear effect on the difficulty algorithm in the opposite direction of the propagation delay. Propagation delay alone decreases difficulty because of the longer apparent solvetime, but as it gets closer to the solvetime (a DAG is needed precisely because propagation delays are "close" to solvetimes), the extra DAG width makes Kaspa's averaging window go back less in time (smaller Max - Min timespan), which increases difficulty (decreases target), which pulls propagation delays back away from the total solvetime, decreasing DAG width. It's a negative feedback that seems to work out OK towards a balance, except you are not controlling DAG width or the apparent solvetime (aka generation time), but their combination. Does it risk oscillations?

There are other ways to do a DAA for a DAG, but they require measuring DAG width. You could target the solvetime after a block's arrival, target the ratio of propagation time to solvetime, or target a DAG width.

If you want to target the solvetime after a miner sees a block, the average parents per block (PPB) in the window might be a good-enough estimate of siblings per generation, which you can use to adjust the equation. Another way to get PPB might be to count generations by stepping from median blue parent to median blue parent through the window, so that PPB = N/count. Here's what I have to adjust for this and for propagation delays (PD). PD can be estimated from an estimate of siblings per generation (see the GhostDAG papers for the equation). Notice that PD and PPB can be artificially lowered (but not raised), which makes the difficulty harder. The only potential drawback is that a block-withholding attacker can make the public difficulty go high to discourage public hashrate before working on his private chain.

target = avgTarget * ((Max - Min - PD * N/PPB) * PPB / (T * N))
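
Here's a sketch of that adjustment; PD and PPB would have to be measured or estimated as described above, and every name below is illustrative rather than an existing implementation.

# Sketch of the PD/PPB-adjusted window calculation above.
# ppb = average parents per block over the window (estimate of siblings per generation)
# pd  = estimated propagation delay in seconds (e.g. from the GhostDAG relation)

def dag_adjusted_target(avg_target, max_ts, min_ts, ppb, pd, T, N):
    timespan = max_ts - min_ts
    # Remove the propagation-delay component accumulated over ~N/PPB generations,
    # then scale by PPB to convert generation time back to per-block time.
    return int(avg_target * (timespan - pd * N / ppb) * ppb / (T * N))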

Another possibility, instead of using PPB as above, is to let the window size refer to generations based on a count of "median blue parents" and use only the Min and Max timestamps from those (but still determine avgTarget from all reds and blues). This probably has the same effect as using PPB = N/count in the above.

If the DAG width is being controlled and set to a constant, then you can use the "median" timestamp, target, and height of the block's parents as the input to BCH's new ASERT. It's the best DAA, and it requires looking only at the parents. It looks like it could cause serious problems if the DAG width is not held constant. In ASERT's case, there is a reference block at some point in the past that the DAA uses to "initialize" target and height. It knows how to set the difficulty by counting blocks after that reference height and knowing (for a given DAG width) how many blocks there should have been since then.

"Absolute" ASERT in normal POW is

T = block time
N = smoothing constant in blocks (same role as N in WTEMA above)
ts = timespan between the parent and the reference block
dH = difference between the parent's and the reference block's heights

next_target = reference_target * e^( ts/T/N - dH/N )
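
A sketch of absolute ASERT, using floating point for clarity; a deployed version (such as BCH's) uses a fixed-point exponential approximation instead of a float exp.

# Absolute ASERT sketch (floating point for readability, not consensus code).
import math

T = 1          # target time per block in seconds (assumed value)
N = 1320       # smoothing constant in blocks ("mean lifetime" = N*T seconds)

def asert_next_target(reference_target, ts, dH):
    # ts = timestamp(parent) - timestamp(reference block)
    # dH = height(parent) - height(reference block)
    return int(reference_target * math.exp((ts - dH * T) / (T * N)))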
