dnlup / doc
Get usage and health data about your Node.js process.
License: ISC License
Consider adding garbage collection metrics.
Work on checks in tests to decrease flakiness.
It is time to start testing/adding ESM support.
The current (not published yet) implementation causes a lot of inline caches to get megamorphic. Let's try to avoid that, if possible.
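One common way to avoid megamorphic call sites is to keep the sampled objects monomorphic, i.e. always create them with the same property set and order. An illustrative sketch (not the package's actual code):

```javascript
// Illustrative sketch: a factory that always produces objects with the same
// hidden class (shape), which keeps V8 inline caches monomorphic at the sites
// that read these objects.
function createSample () {
  return {
    cpu: 0,
    memory: 0,
    eventLoopDelay: 0
  }
}

const a = createSample()
const b = createSample()
// Both objects share the same shape; code reading sample.cpu stays monomorphic.
```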
Keeping this field is starting to be cumbersome. Once the set of options for the event loop delay gives the user enough flexibility I think it should be removed.
Like _getActiveHandles, process also has _getActiveRequests. It might be a useful addition.
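A sketch of how this could be sampled, guarded because _getActiveRequests is an undocumented private API and may change between Node versions:

```javascript
// Sketch: sampling the number of in-flight async requests via the undocumented
// private API. Guarded, since _getActiveRequests may be removed or changed.
function activeRequestCount () {
  if (typeof process._getActiveRequests !== 'function') return null
  return process._getActiveRequests().length
}

console.log(activeRequestCount())
```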
Consider letting the user call sample manually.
Sometimes timer-based tests fail. Maybe using a fake timer would solve this.
Benchmark results could be automatically parsed and validated using JSON. For now, they still have to be manually inspected.
I think it might be useful to track the total time and pauses taken by the gc. Right now the user has to do this computation.
As an example, here's how Datadog exposes it:
Let's try to see if we can reuse Node types in the declarations.
Use Node diagnostics channels to gather tcp socket count.
I must have done something wrong during a rebase. From version 2.0.2 to version 3.0.1, I found out that I am carrying duplicate commits. I have fixed that on the branch next, which is now the repo's default branch, but I cannot fix it in the already published tags. That would require unpublishing those versions from npm, rewriting the changelog, and making new tags. A lot of things could go wrong in the process. I'll do better next time. I should have known better. I sincerely apologize.
/cc @acheronfail
I'll keep this open for a bit as a reminder.
Consider adding an entry point that can be used with --require
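A minimal sketch of what such a preload file could look like (the file name register.js and the logging behavior are hypothetical):

```javascript
// register.js — hypothetical preload entry point, used as:
//   node --require ./register.js app.js
// Periodically logs a resource usage sample without keeping the process alive.
const timer = setInterval(() => {
  console.log(process.resourceUsage())
}, 1000)

timer.unref() // the timer must not prevent the app from exiting
```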
As noted in the documentation, the observer should be disconnected once the callback is called, for performance reasons.
I am opening this to remember to use the native Histogram once it is supported on all Node LTS versions. As of now, it's supported only on Node 16 (see here).
Using this https://gist.github.com/treecy/437791c02e9edfa2e1006f1da9d34e10, check if our listing process has anything that can be improved or fixed.
Add process active handles. Since this uses an undocumented private API of the process, it should be optional, and the docs should report this.
As an example:
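A sketch of how the private API could be used, grouping handles by constructor name. The helper name is made up, and the call is guarded because _getActiveHandles is undocumented and may change between Node versions:

```javascript
// Sketch: counting active handles by type via the undocumented private API.
function activeHandleCounts () {
  if (typeof process._getActiveHandles !== 'function') return null
  const counts = {}
  for (const handle of process._getActiveHandles()) {
    const type = handle && handle.constructor ? handle.constructor.name : 'unknown'
    counts[type] = (counts[type] || 0) + 1
  }
  return counts
}

console.log(activeHandleCounts())
```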
It might be useful to also have a streaming API. This might introduce a breaking change because I am already using the data event.
As titled. To avoid clashes if eventLoopUtilization ever defines its own options.
Following the suggestion made by @acheronfail in #103 (comment), let's try to refactor how the default export is handled to allow TS to catch errors at compile time and not at runtime as it is now with the current definitions.
To give a better picture of the GC work happening in the process, I think the metric should include the count for each PerformanceEntry kind and, potentially, each flag.
A new API has appeared in Node 12:
https://nodejs.org/dist/latest-v12.x/docs/api/process.html#process_process_resourceusage
It might be useful in this module.
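A quick sketch of what the API returns (available since Node 12.6):

```javascript
// Sketch: sampling process-wide resource usage via process.resourceUsage().
const usage = process.resourceUsage()

console.log(usage.userCPUTime)   // microseconds spent in user code
console.log(usage.systemCPUTime) // microseconds spent in the kernel
console.log(usage.maxRSS)        // maximum resident set size, in kilobytes
```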
Due to a limitation in standard, types are ignored when linting.
Ref:
Node diagnostic channels are useful to instrument code and I think it would be useful to support them.
Unfortunately, tests are getting flaky again. This needs to be looked into.
Sometimes it's just a failed npm install and there's nothing I can do about it, but most of the time it's a failed check on a value, which means that this approach is not super reliable in a CI environment.
Publish a simple website to document this module.
In Node 14 there's this new API that should be available here:
https://nodejs.org/api/perf_hooks.html#perf_hooks_performance_eventlooputilization_util1_util2
Tests are flaky right now.
The only platform where they run consistently is Linux. On one side, I would prefer not to limit the checks on metric values to just being greater than zero; on the other, I should find a good balance to avoid these random errors.
As the title says, I think doc should expose helpers to allow the user to check if a metric is supported.
Dependabot encountered the following error when parsing your .dependabot/config.yml:
The property '#/version' value 2 did not match one of the following values: 1
The property '#/update_configs/0/package_manager' value "npm" did not match one of the following values: javascript, ruby:bundler, php:composer, java:maven, elixir:hex, rust:cargo, java:gradle, dotnet:nuget, go:dep, go:modules, docker, elm, submodules, github_actions, python, terraform
Please update the config file to conform with Dependabot's specification using our docs and online validator.
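The error comes from using version: 2 keys in the old .dependabot/config.yml format. A sketch of the equivalent v2 configuration, which lives at .github/dependabot.yml instead (the weekly schedule is an assumption):

```yaml
# .github/dependabot.yml — Dependabot v2 configuration format
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```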
While it is nice to have a JSON schema for defining the configuration options, the generated validate function is not small. Consider changing this approach.
Add benchmarks to keep the overhead in check. The manual tests I have done so far use a barebones http server to check whether its req/sec throughput is affected, which so far it is not. They will be useful in the future as we add new metrics to collect.
Because a Sampler instance can be stopped, it would be nice if all the metrics supported starting and stopping/destroying on demand without creating a new Sampler.
We should fix this deprecation notice:
(node:53274) [DEP0152] DeprecationWarning: Custom PerformanceEntry accessors are deprecated. Please use the detail property.
Node API as a reference:
This also opens up a question: should this module be worker_thread aware? Meaning, if it's running in a worker thread, should it forward metrics to the main thread?
There should be options that let the user select which metrics the package should collect.