
Comments (5)

zawy12 commented on August 19, 2024

Zcash performance.

Zcash (and Hush, which follows) uses Digishield v3, which is:
next_D = avg(past 17 D) * T / (0.75*T + 0.25*avg(past 17 ST delayed 5 blocks))
ST = solvetime, T = target time. The delay is to use the MTP to prevent out-of-sequence timestamps. There are POW limits on the denominator that are rarely activated and actually greatly hurt the results if they are reached.
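The rule above can be expressed as a minimal sketch. The function and variable names here are hypothetical (this is not Zcash's actual code), and the rarely-hit POW limits on the denominator are omitted:

```python
# Minimal sketch of the Digishield v3 rule quoted above. Names are
# hypothetical; this is not Zcash's actual code, and the POW limits
# on the denominator are omitted.

def next_difficulty(difficulties, solvetimes, T, N=17, delay=5):
    """difficulties, solvetimes: lists with the newest block last.
    T: target block time in seconds."""
    avg_D = sum(difficulties[-N:]) / N
    # Use solvetimes ending 'delay' blocks before the tip (the MTP delay
    # that protects against out-of-sequence timestamps).
    avg_ST = sum(solvetimes[-(N + delay):-delay]) / N
    # Tempering: only 25% weight on observed solvetimes vs 75% on target.
    return avg_D * T / (0.75 * T + 0.25 * avg_ST)
```

With steady solvetimes equal to T this returns the average difficulty unchanged; when blocks come twice as fast, the denominator shrinks by only about 12.5%, which is the tempering that slows Digishield's response.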

NOTE: the scale on the following charts has been changed from the above post. The following charts divide the avg 11 solvetimes and "hash attacks" by 4. A hash attack at 2x the baseline shows at 0.5, and an avg 11 ST of 2 shows at 0.5. These are the minimum values that trigger a count, and the display spikes.

I had to write a special script to handle timestamps when trying to average Zcash's solvetimes because Zcash has had a lot of out-of-sequence timestamps. A complicating factor is that after a past time was assigned, often a timestamp just 1 second after it was assigned. Sometimes the "+1" was assigned twice in a row, so my script got more complicated. I didn't code for 3 times in a row, which you can see in every large >1.5 "hash attack" peak below. Those were not increases in hash rate, but three +1 values assigned in a row. See the data below the chart. So Zcash's "hash attack" values are even better than shown, indicating it did not attract hash attacks from accidental variation and responded fast enough to price changes.
[image: zcash1]

The above is the first 60,000 blocks. The following is a more recent 60,000, starting at block 170,000. Notice the hash attack spikes are not present, which means the timestamp problems mostly stopped (or were prevented) and the "blocks stolen" metric is more accurate. Also notice the avg solvetime is 1.004, a 0.4% error. This is the same as what I saw in experiments, and higher than the 0.2% Zcash has claimed. They started with the very first blocks, which came really fast. From block 10,000 to 230,000 the average solvetime was 0.48% too high. I mention it to demonstrate the experiments are accurate.

[image: zcash2_170k-230k]

Here is an example of a lot of bad timestamps being assigned. For some reason, miners assign the oldest possible timestamp, as can be seen by the large negatives; then usually a large positive "solvetime" follows, which is a correct timestamp. The correct timestamp minus the incorrect old timestamp appears as a large solvetime. The first large negative solvetime results from the timestamp being set equal to the minimum allowed, which is the median time past (MTP, the median of the past 11 timestamps). The "1's" instead of large negatives result from 2 or more timestamps in a row being set to the MTP when the MTP did not change (a +1 seems to be inserted by the code). The 1's occur only if the MTP did not change, which is possible because of previous out-of-sequence timestamps.

Long story short
It appears about 30% of the mining power was assigning bad timestamps and was a constant source of hashing, and the bad timestamps did not help them in any way.

Long story
It appears clear that the first bad timestamp is almost always negative, which does not help a miner. In some algos it can drive difficulty up a little. In others there is a way it could drive difficulty to zero, but Zcash does not have that code error. Also, the -670 to 0 sequence of 5 blocks are timestamps all at the MTP, and they appear to be wrong, as evidenced by the 1916 that follows, which appears to be a correct timestamp, as evidenced by the 9 reasonable solvetimes after it. This is strange because the hash rate did not appear to have increased. If anything, the hash rate was less than average when the timestamps were bad. If the hash rate did not increase, then this big miner, pool, or group of miners is usually there mining. Judging by the frequency of negatives, they were about 30% of the hashrate. The long sequence of 1's is consistent with 30% because it happened only about once per 3,000 blocks. You can take any pair of timestamps that appear to be correct and are some number of blocks apart, subtract them, and divide by the number of blocks apart to get an estimate of the solvetime during this and other unusual periods; the average solvetime comes out about correct.
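The pairwise sanity check in the last sentence is simple arithmetic. As a sketch with illustrative names (not from any coin's codebase):

```python
# Estimate the mean solvetime between two blocks whose timestamps look
# correct, ignoring the unreliable per-block timestamps in between.
# Names are illustrative, not from any coin's codebase.

def avg_solvetime_between(ts_a, height_a, ts_b, height_b):
    return (ts_b - ts_a) / (height_b - height_a)
```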

The right column is the assigned timestamp for that block minus the MTP of the previous 11 timestamps. When it is "1", the timestamp was the oldest that would have been allowed.
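For reference, the MTP described above (median of the past 11 timestamps) can be sketched like this, assuming Bitcoin-style rules where a block's timestamp must exceed the MTP; names are hypothetical:

```python
def median_time_past(timestamps, n=11):
    """Median of the last n block timestamps. Timestamps need not be
    monotonic on-chain, so sort before taking the median."""
    recent = sorted(timestamps[-n:])
    return recent[len(recent) // 2]

def min_allowed_timestamp(timestamps):
    # Oldest timestamp a new block may carry: one second past the MTP.
    # Miners assigning exactly this value produce the "1" entries in the
    # right-hand column (timestamp - MTP == 1).
    return median_time_past(timestamps) + 1
```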

[image: timestamp data]

from difficulty-algorithms.

zawy12 commented on August 19, 2024

Hush performance.

Hush has the same POW (Equihash) and difficulty algorithm as Zcash, but has about 1% of the hash power, so it's a good test of whether the performance metrics are much worse. You can see its performance seems a little worse, but it had to deal with huge swings in hash rate in the beginning. Here are the first 60,000 blocks.

[image: hush1]

Here are the most recent 60,000 blocks. I do not know why, but it has been a lot worse the past 100 days (57,600 blocks), even though the hash rate is a lot more stable. And for some reason the past week has been a lot better (last half of the last chart).

[image: hush_170k-223k]


zawy12 commented on August 19, 2024

Masari performance.

Masari started using the WHM algorithm on this page and is doing awesome. This shows their history of problems and the new algo's results.

Masari first had Monero's default difficulty, which is like an SMA with N=730, then it switched to Sumokoin's pseudo-SMA with N=17, and recently it switched to WHM with N=60. The N=17 algorithm that Masari and Sumokoin use is
next_D = avg(17 D) * T / (0.8*avg(17 ST) + 0.3*median(17 ST))
ST = solvetimes. The 17 D and ST are 6 blocks behind the most recent block due to using the MTP to prevent timestamp manipulation. The 0.3 instead of 0.2 is because the median is ln(2) = 0.693 of the mean.
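A minimal sketch of this rule, under assumptions: the *T factor follows the same pattern as the Digishield formula earlier in the thread, the 6-block MTP delay is omitted for brevity, and the names are hypothetical:

```python
# Sketch of the N=17 pseudo-SMA described above (assumed form; not the
# actual Sumokoin/Masari code, and the 6-block MTP delay is omitted).
from statistics import median

def next_difficulty_pseudo_sma(difficulties, solvetimes, T, N=17):
    avg_D = sum(difficulties[-N:]) / N
    avg_ST = sum(solvetimes[-N:]) / N
    med_ST = median(solvetimes[-N:])
    # 0.3 (not 0.2) on the median term because for exponentially
    # distributed solvetimes median(ST) ~= ln(2)*mean(ST),
    # so 0.8 + 0.3*0.693 ~= 1.
    return avg_D * T / (0.8 * avg_ST + 0.3 * med_ST)
```

The median term makes the denominator less sensitive to a single huge (or hugely negative) solvetime than a pure average would be.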

I should mention that the 3 coins here using N=17 are my fault. N=30 if not N=60 would have been a lot better for all of them. But from this past mistake, I have a better estimate of how to select N.

[image: masari1]

The next image starts where the above image left off. WHM N=60 (the second of the 3 on that page) was employed at block 63,000.

[image: masari_60k-74k]


zawy12 commented on August 19, 2024

Sumokoin performance.

Masari above got its algorithm from Sumokoin, but Sumokoin has T=240 as opposed to Masari's T=120. Possibly this is why Masari had a lot more trouble with the same algorithm. When you go to a lower T, you need to raise N, and vice versa.

Like Masari, Sumokoin also started with a high N value. I'm not sure whether it was the same as Masari's (the Monero default). N=17 worked a lot better for them than for Masari, but you can see it did not do well on the metrics.

[image: sumokoin1]

[image: sumokoin2]


zawy12 commented on August 19, 2024

Karbowanec performance.

Like Sumokoin, it started with the Monero or Cryptonote default (N=300) and was forced to fork. They chose N=17 on my recommendation and have been happy with it, but it does not appear nearly as good as it could have been. My selection of N was too small. The solvetimes being too high is the result of a low-N SMA naturally causing this, not a deeper problem. It just needed a 0.96 adjustment factor.

Since I didn't adjust these charts for the high avg ST, the "blocks stolen" metric is too low. This also applies to Sumokoin and Masari above, but not to the head-to-head comparisons at the very top, which have the adjustment.

There are 3 images of 7 charts each, covering 60,000 blocks each. This covers blocks 0 to 180,000.

[image: karb1]
[image: karb2]
[image: karb3]

