Hi all,
My research group at Princeton has been collaborating with Mozilla on a project that involves precise monotonic cross-context timing. We've encountered a recurring browser- and platform-specific implementation issue, where a context's High Resolution Time monotonic clock does not tick during sleep: mdn/content#4713. Contributors to the standard previously discussed a version of this issue in Chrome and reached consensus that the clock should tick during sleep: #65.
I would like to suggest several related revisions to the High Resolution Time standard for the Web Performance Working Group's consideration.
First, the language that closed #65 (PR #69) does not expressly address what should happen during sleep.
<p class="note">In certain scenarios (e.g. when a tab is backgrounded), the
user agent may choose to throttle timers and periodic callbacks run in that
context or even freeze them entirely. Any such throttling should not affect
the resolution or accuracy of the time returned by the monotonic clock.</p>
Addressing the scenario would be valuable for implementers of the API, since operating systems typically offer separate monotonic clock APIs that do and do not include time spent asleep (e.g., CLOCK_BOOTTIME vs. CLOCK_MONOTONIC on Linux). It would also be valuable for users of the API, who could better understand the expected behavior and document inconsistent behavior. Here's an example revision.
<p class="note">In certain scenarios (e.g., when a tab is backgrounded,
the thread or process for a tab is sleeping, or the system has entered a
suspended state), the user agent may choose to throttle timers and
periodic callbacks run in a context or even freeze them entirely. Any
such throttling or freezing should not affect the resolution or accuracy
of the time returned by the monotonic clock.</p>
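For concreteness, here's a toy model (invented numbers, not any browser's implementation) of the two monotonic clock behaviors operating systems typically expose — one that keeps ticking through a system suspend and one that pauses:

```javascript
// Hypothetical models of the two monotonic clock behaviors
// (e.g., CLOCK_BOOTTIME vs. CLOCK_MONOTONIC on Linux).
// All times are in milliseconds.

// A clock that keeps ticking while the system is suspended.
function continuousClock(uptimeMs, suspendedMs) {
  return uptimeMs + suspendedMs;
}

// A clock that does not tick while the system is suspended.
function pausingClock(uptimeMs) {
  return uptimeMs;
}

// 10 s after boot, with 4 s of that spent suspended:
const uptime = 6_000;    // time actually running
const suspended = 4_000; // time asleep

console.log(continuousClock(uptime, suspended)); // 10000
console.log(pausingClock(uptime));               // 6000
```

The two readings diverge by exactly the time spent asleep, which is why the note should say which behavior is intended.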
Second, I would suggest upgrading the informative note on this issue to a normative requirement. If a context's monotonic clock depends on that context's sleep history, then the following use case cannot be reliably supported under real-world conditions.
<p>This specification defines a few different capabilities: it provides
timestamps based on a stable, monotonic clock, comparable across
contexts, with potential sub-millisecond resolution.</p>
...
<p>Comparing timestamps between contexts is essential e.g. when
synchronizing work between a {{Worker}} and the main thread or when
instrumenting such work in order to create a unified view of the event
timeline.</p>
If a High Resolution Time timestamp can vary with a context's sleep history, then timestamps are not comparable across contexts in real-world scenarios (where thread, process, and system sleep are frequent and unpredictable). Similarly, the example provided in Section 1.2 (Example 2) is not reliable in real-world scenarios. A stronger version of this normative requirement would be to migrate the per-context monotonic clocks entirely to the shared monotonic clock (i.e., the same clock ticking in all contexts, just with a per-context time origin). That might not have been realistic with OS capabilities ~6 years ago, but it might be realistic now. Some relevant context that I found helpful: #22 and #29.
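To make the comparability problem concrete, here's a sketch (with made-up time-origin values, not readings from a real browser) of mapping context-local timestamps onto the shared timeline, and of how a clock that paused during sleep in only one context silently breaks the comparison:

```javascript
// Map a context-local timestamp onto the shared timeline, as in
// performance.timeOrigin + performance.now(). All values below are
// illustrative; times are in milliseconds.

function toSharedTime(context, localNow) {
  return context.timeOrigin + localNow;
}

// Two contexts (e.g., the main thread and a Worker) with different
// time origins on the shared monotonic clock.
const mainThread = { timeOrigin: 1_000 };
const worker = { timeOrigin: 1_250 };

// The same instant, observed from each context, should map to the
// same shared time:
console.log(toSharedTime(mainThread, 500)); // 1500
console.log(toSharedTime(worker, 250));     // 1500

// But if the worker's clock stopped for 100 ms of sleep, its local
// reading at that same instant is only 150, and the mapping disagrees:
console.log(toSharedTime(worker, 150));     // 1400, not 1500
```

The 100 ms discrepancy is invisible to the caller, which is what makes the cross-context use case unreliable.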
Third, I would suggest adding semantics for performance.timeOrigin + performance.now() and how that compares to Date.now() in the explanation of monotonic clocks (Section 6). There are normative requirements that DOMHighResTimeStamp count in milliseconds, that the reference point for the shared monotonic clock is ordinary ECMAScript time, and that a context's time origin be set with the shared monotonic clock. The only way I see to satisfy these normative requirements is if performance.timeOrigin + performance.now() and Date.now() both represent the current time, and differ only because of either 1) user or automated adjustments to the system clock after the shared monotonic clock starts ticking, or 2) differences in the underlying clock implementations (e.g., there might be different hardware clocks with slightly different frequencies). There are definitely use cases for those performance.timeOrigin + performance.now() semantics, where the goal is to timestamp events in a way that approximates real-world time, is monotonic, ideally is comparable across contexts (the prior item), and is not subject to user or operating system clock adjustments, assuming that the system clock was approximately correct when the shared monotonic clock started ticking. I recognize there have been prior discussions about how High Resolution Time relates to Date.now(), with a resolution that the two APIs shouldn't have their values compared (e.g., #27). To be clear, I'm not suggesting a quantitative guarantee that the values provided by the two APIs will be similar, but rather a semantic definition of how the two APIs relate and the specific conditions that can cause divergence over time.
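As a quick sanity check of those semantics (run here under Node.js, where performance is a global; absent system clock adjustments since the context started, the two expressions should agree to within implementation noise):

```javascript
// Compare the two ways of reading "now". performance.timeOrigin is set
// from the shared monotonic clock when the context is created, and
// performance.now() is the monotonic time elapsed since then.
const viaMonotonic = performance.timeOrigin + performance.now();
const viaSystemClock = Date.now();

// The difference reflects only system clock adjustments since the
// context started, plus clock-implementation differences and noise.
console.log(Math.abs(viaMonotonic - viaSystemClock));
```

On a freshly started process this difference is typically well under a millisecond; it grows only under the two divergence conditions described above.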
Fourth, I would suggest defining what the term "skew" means in the document, and perhaps using a different term instead. The notion, if I understand correctly from discussion in prior issues, is that NTP and similar automatic adjustments for clock errors should not change (skew) the monotonic clocks. Unfortunately, "skew" also typically means a difference between two clocks (which NTP tries to mitigate). So, depending on the meaning of "skew," the normative requirement that monotonic clocks not have "system clock skew" either means they must not change when the system clock changes, or that they must change when the system clock changes. I would also suggest clarifying that monotonic adjustments to a monotonic clock are permissible (e.g., monotonic NTP corrections for oscillator frequency or adjtime corrections; see CLOCK_MONOTONIC vs. CLOCK_MONOTONIC_RAW in clock_gettime); the current text could be interpreted to prohibit any adjustments.
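To illustrate the two readings of "skew," here's a toy model (all constants invented) of a system clock that receives an NTP step correction, alongside a monotonic clock that only absorbs a gradual frequency correction:

```javascript
// Toy clock models as functions of raw hardware time t, in ms.

// Reading 1: the monotonic clock must not change when the system
// clock is adjusted. The system clock starts 2000 ms fast and gets
// a -2000 ms NTP step at t = 5000, so it jumps backwards...
function systemClock(t) {
  return t < 5000 ? t + 2000 : t;
}
// ...while the monotonic clock ignores the step entirely.
function monotonicClock(t) {
  return t;
}

// Reading 2: "skew" as the *difference* between two clocks, which
// frequency (slew) corrections are allowed to reduce gradually.
function rawOscillator(t) {
  return t * 1.001; // runs 0.1% fast (CLOCK_MONOTONIC_RAW-like)
}
function slewedMonotonic(t) {
  return rawOscillator(t) / 1.001; // corrected, but still monotonic
}

console.log(systemClock(4999)); // 6999
console.log(systemClock(5001)); // 5001 -- went backwards
console.log(monotonicClock(5001) > monotonicClock(4999)); // true
```

Under reading 1, only systemClock above violates the requirement; under reading 2, even the slewed correction could be read as prohibited, which is the ambiguity I'd like the text to resolve.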