
speedline's Introduction

speedline Build Status NPM speedline package

speedline screenshot

Background

The Navigation Timing API provides useful data that can be used to measure the performance of a website. Unfortunately this API has never been good at capturing the actual user experience.

The Speed Index, introduced by WebpageTest.org, aims to solve this issue. It measures how fast the page content is visually displayed. The current implementation is based on the Visual Progress from Video Capture calculation method described on the Speed Index page. The visual progress is calculated from the histogram distance between each frame and the final frame.

Speedline also calculates the perceptual speed index. It is based on the same principle as the original speed index, but it computes the visual progression between frames using SSIM instead of the histogram distance.
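
For a rough feel of the method, here is a minimal sketch (illustrative only, not speedline's actual code): visual progress is derived from capped histogram distances against the final frame, and the speed index is the area above the resulting progress curve.

// Illustrative sketch, assuming 3 x 256 per-channel color histograms per frame.
function frameProgress(currentHist, initialHist, targetHist) {
  let matched = 0;
  let total = 0;
  for (let channel = 0; channel < 3; channel++) {
    for (let bucket = 0; bucket < 256; bucket++) {
      // Distance moved away from the initial frame, capped at the distance the
      // final (target) frame reaches.
      const currentDiff = Math.abs(currentHist[channel][bucket] - initialHist[channel][bucket]);
      const targetDiff = Math.abs(targetHist[channel][bucket] - initialHist[channel][bucket]);
      matched += Math.min(currentDiff, targetDiff);
      total += targetDiff;
    }
  }
  return total === 0 ? 1 : matched / total; // 0 = initial frame, 1 = final frame
}

// The index is then the area above the visual-progress curve: the longer the
// page stays visually incomplete, the higher (worse) the score.
function speedIndex(frames) { // frames: [{ timestamp, progress }], progress in [0, 1]
  let score = 0;
  for (let i = 1; i < frames.length; i++) {
    const interval = frames[i].timestamp - frames[i - 1].timestamp;
    score += interval * (1 - frames[i - 1].progress);
  }
  return score;
}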

Install the CLI

$ npm install -g speedline

Usage

Note: You should enable the screenshot options before recording the timeline.

$ speedline --help

  Usage
    $ speedline <timeline> [options]

  Options
    --pretty  Pretty print the output
    --fast    Skip parsing frames between similar ones
                Disclaimer: may result in different metrics due to skipped frames

  Examples
    $ speedline ./timeline.json

By default the CLI produces the same output as visualmetrics. You can use the --pretty option if you want to see the histogram.

The speedline-core module

See the readme of speedline-core.

License

MIT © Pierre-Marie Dartus

Dev

The repo is split into CLI and core. The core dependencies are duplicated in both package.json files. It is what it is.

To install:

yarn && yarn install-all

Releasing

Releasing both CLI and core:

yarn version # and bump appropriately
# update the version in core/package.json
git commit --amend --all # to amend into the tagged commit
npm publish
cd core && npm publish
git push

speedline's People

Contributors

brendankenny, dependabot[bot], eligeske, jfsiii, patrickhulce, paulirish, pmdartus, samccone, samthor, sindresorhus, theneekz, utix


speedline's Issues

Ship a new release

We had some dependency bumps that will address a few vulnerability warnings.

Consider separating CLI.

It'd significantly reduce dependencies for packages that do not use it (i.e. lighthouse). Since babar, loud-rejection, and meow are CLI-specific, speedline would go from 47 total dependencies to 2.

heroku deployment failed

remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Node.js app detected
remote:
remote: -----> Creating runtime environment
remote:
remote: NPM_CONFIG_LOGLEVEL=error
remote: NPM_CONFIG_PRODUCTION=true
remote: NODE_VERBOSE=false
remote: NODE_ENV=production
remote: NODE_MODULES_CACHE=true
remote:
remote: -----> Installing binaries
remote: engines.node (package.json): ~6.11.0
remote: engines.npm (package.json): ~5.0.3
remote:
remote: Resolving node version ~6.11.0 via semver.io...
remote: Downloading and installing node 6.11.1...
remote: Resolving npm version ~5.0.3 via semver.io...
remote: Downloading and installing npm 5.0.4 (replacing version 3.10.10)...
remote:
remote: -----> Restoring cache
remote: Skipping cache restore (new-signature)
remote:
remote: -----> Building dependencies
remote: Prebuild detected (node_modules already exists)
remote: Rebuilding any native modules
remote: npm ERR! path /tmp/build_9c402097d0e7567eccf5e236dc7f0959/node_modules/jade/bin/jade.js
remote: npm ERR! code ENOENT
remote: npm ERR! errno -2
remote: npm ERR! syscall chmod
remote: npm ERR! enoent ENOENT: no such file or directory, chmod '/tmp/build_9c402097d0e7567eccf5e236dc7f0959/node_modules/jade/bin/jade.js'
remote: npm ERR! enoent This is related to npm not being able to find a file.
remote: npm ERR! enoent
remote:
remote: npm ERR! A complete log of this run can be found in:
remote: npm ERR! /app/.npm/_logs/2017-07-16T02_45_14_132Z-debug.log
remote:
remote: -----> Build failed
remote:
remote: We're sorry this build is failing! You can troubleshoot common issues here:
remote: https://devcenter.heroku.com/articles/troubleshooting-node-deploys
remote:
remote: Some possible problems:
remote:
remote: - node_modules checked into source control
remote: https://blog.heroku.com/node-habits-2016#9-only-git-the-important-bits
remote:
remote: Love,
remote: Heroku
remote:
remote: ! Push rejected, failed to compile Node.js app.
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to pacific-harbor-25184.
remote:
To https://git.heroku.com/pacific-harbor-25184.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/pacific-harbor-25184.git'

So I do not know why it's failing to deploy.

undefined:62 Syntax error: Unexpected token ...

When I run this tool I get this error:

speedline data.json
undefined:62
  this.children = new Map([...this.children.entries()].sort(sortingFunction))
                           ^^^
SyntaxError: Unexpected token ...
    at requireval (evalmachine.<anonymous>:13:39)
    at evalmachine.<anonymous>:50:1
    at new ModelAPI (/usr/local/lib/node_modules/speedline/node_modules/devtools-timeline-model/index.js:19:25)
    at Object.extractFramesFromTimeline (/usr/local/lib/node_modules/speedline/lib/frame.js:71:14)
    at module.exports (/usr/local/lib/node_modules/speedline/lib/index.js:42:25)
    at Object.<anonymous> (/usr/local/lib/node_modules/speedline/cli.js:73:1)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
$ node --version
v4.3.2

Improve SSIM implementation

I was interested in how the SSIM module we consume (image-ssim) differs from the reference implementation.

visualmetrics uses https://github.com/jterrace/pyssim which points to https://ece.uwaterloo.ca/~z70wang/research/ssim/ . The reference table there (the Einstein images) lists the last SSIM as 0.662.
However, our module reports (via http://darosh.github.io/image-ssim-js/test/browser_test.html) this pair as 0.741.
The implementation also looks fairly different from most of the other SSIM modules I'm looking at.

There are two newer SSIM modules that I'm seeing: https://github.com/obartra/ssim and https://github.com/IonicaBizau/img-ssim

However, both of them are unattractive in their current state.

Regardless, I'd like to still keep this issue open to explore changing our SSIM dependency.

GraphicsMagick or ImageMagick?

The readme says IM must be installed, but the module depends on the GM module.

I'm just confused about this. Is there a fallback layer in between?

SpeedIndex reported as 0 on sites with <=3 frames

We're seeing some cases where speedIndex is reported as 0. This typically happens with sites like airhorner.com and example.com, which are very simple and may have only a few frames.

We've seen these sorts of results:

{
  'frames': [...],
  'first': 352,
  'complete': 352,
  'duration': 3417,
  'speedIndex': 352
}
{
  'frames': [...],
  'first': 310,
  'complete': 310,
  'duration': 3366,
  'speedIndex': 0
}

cc @brendankenny

Faster histogram creation

If I get a chance I'll take a crack at it, but browsing through the code, the image pixel processing jumped out at me.

The histogram extraction logic runs through the image top-to-bottom and then across (and repeats for each color channel).

The images are stored in RAM on a row-by-row basis with each pixel color-packed as sequential bytes. As it stands now the CPU caches will be thrashing on the image data. Flipping the for loops to have the height on the outside, width in the middle and color channel on the inside will be much friendlier to the CPU caches and should be a lot faster.
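
A hedged sketch of the suggested ordering (illustrative names only, not the actual speedline internals), assuming the decoded image is a row-major RGBA buffer:

function buildHistogram(data, width, height) {
  // One 256-bucket histogram per RGB channel.
  const histogram = [new Uint32Array(256), new Uint32Array(256), new Uint32Array(256)];
  for (let y = 0; y < height; y++) {          // rows outermost: walk memory sequentially
    for (let x = 0; x < width; x++) {         // then columns
      const offset = (y * width + x) * 4;     // 4 bytes per RGBA pixel
      for (let channel = 0; channel < 3; channel++) {   // color channel innermost
        histogram[channel][data[offset + channel]]++;
      }
    }
  }
  return histogram;
}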

Smarter de-duplication of frames

We currently do some binary search on progress in "fast" mode, but we could do better by also de-duping identical frames in all cases, not just fast.
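
For example, a minimal sketch of dropping byte-identical frames up front (the frame shape shown is an assumption, not the actual speedline API):

const crypto = require('crypto');

// Assumes each frame exposes its encoded screenshot as a Buffer via `image`.
function dropIdenticalFrames(frames) {
  const kept = [];
  let previousDigest = null;
  for (const frame of frames) {
    const digest = crypto.createHash('sha1').update(frame.image).digest('hex');
    if (digest !== previousDigest) {
      kept.push(frame);
      previousDigest = digest;
    }
  }
  return kept;
}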

first trace event can be far off from navStart

discovered in GoogleChrome/lighthouse#2095

Traces can sometimes contain events from far before navStart. Even in more regular cases I've seen screenshots from the previous page linger in a trace of the new page, which throws off the computation.

Two proposals with varying degrees of urgency:

  1. filter out screenshots before the user-provided timeOrigin opt (PR: #2114)
  2. consider defaulting startTs to the last occurrence of navigationStart for the root frame instead of the first event of the trace (#38)

White frame perceptual progress can be significantly greater than 0

The problem, now that we're using real SSIM, is that getPerceptualProgress normalizes the value relative to the global minimum instead of relative to the SSIM of the white frame against the target frame. This creates some very unintuitive histograms, because a page with a white background can be quite structurally similar to a white frame.

A recent trace of CNN highlights this issue. Their home page loads content (header, image, and text) before an ad pops in above it and pushes all the content down. The frame before the ad has a structural similarity with the target of ~18%, while empty white has a structural similarity of ~30%, which results in the following histogram:
(screenshot: resulting perceptual progress histogram)

This artificially deflates the perceptual speed index by giving credit before first paint, and it does not appropriately punish the site for progressively rendering toward a target that is abruptly changed. The histogram should instead look like the following if normalized to the first frame:
(screenshot: histogram normalized to the first frame)

(screenshots: the page before and after the ad is injected)

Indexes view layout instability as positive progress

see #49 (comment)

Classical speed index certainly suffers from this much more than PSI does, but PSI also fails in similar situations to identify disruptive layout shifts as negative events rather than positive ones (which is why I say it rewards the jank, even if the PSI metric will be inflated because of it). Here's an example gist of a page that has multiple elements coming in above the primary content to simulate an ad popping in over an article. Because the target is the ultimately disrupted page, any progress toward the target is positive in the eyes of PSI, and the graph of SSIM over time is indistinguishable from a standard progressively enhanced load. You can also see this play out in real-world cases like theverge.com when a video ad is injected after the header but before the content. While PSI is appropriately inflated due to the later structural changes, there's still no signal that some of the progress was disruptive to the user.

My primary point is that if you're looking for a signal of layout stability and visual churn throughout the page lifecycle separate from load time, other signals are required beyond speed index today. I've been playing around with something that examines lost edges and frame-to-frame similarity rather than progress toward a target, but if anyone has insights or work here I'd love to see them :)
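
As a purely exploratory sketch of that direction (not part of speedline): compare consecutive frames directly and flag large similarity drops as disruptive transitions, independently of progress toward the final target.

// `ssim(a, b)` is assumed to return a similarity in [0, 1]; frames are assumed
// to be { timestamp, image } pairs. Both assumptions are for illustration only.
function findDisruptiveTransitions(frames, ssim, threshold = 0.9) {
  const disruptions = [];
  for (let i = 1; i < frames.length; i++) {
    const similarity = ssim(frames[i - 1].image, frames[i].image);
    if (similarity < threshold) {
      // A large frame-to-frame change is a candidate layout shift, independent
      // of whether it moved the page toward the final target.
      disruptions.push({ timestamp: frames[i].timestamp, similarity });
    }
  }
  return disruptions;
}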

(attachments: GIF of the load timeline; Speedline output screenshot)

fix tsc types

Back in #69 @brendankenny added some typechecking stuff, but it looks like it's atrophied and there are now 25 errors.

It was never part of the test script or any CI stuff. Shrug.

Baseline frame for comparison

The calculations of SpeedIndex in WPT are based on video frames, which are captured at 30-60fps during load. However, we're using frames that are only captured when the browser ships them. In practice, this nets out to one key difference:

There's no guarantee we'll have white/blank frames at the start of our trace.

As a result, we can end up using a contentful frame as our baseline, which leads to an incorrect representation of 0% visual progress. (I've profiled a number of sites whose first shipped frame is already rather complete.)

It feels like the correct solution here is to generate a white frame at the timeOrigin to use as our first/baseline frame. +@pahammad is this the right approach?

Somewhat related to #21
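
A rough sketch of that idea, assuming frames are simple { timestamp, image } pairs with row-major RGBA buffers (the helper is hypothetical, not the actual fix):

function prependWhiteBaseline(frames, timeOrigin, width, height) {
  // 4 bytes per pixel, all 0xFF: an opaque white RGBA frame at the time origin.
  const whiteImage = Buffer.alloc(width * height * 4, 0xff);
  return [{ timestamp: timeOrigin, image: whiteImage }, ...frames];
}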

Use SSIM as progress without normalization

As a followup to #48, we should follow @pahammad's advice and do what visualmetrics does: use SSIM as the progress once first paint has occurred, and do not normalize.

The normalization causes problems on sites like www.chromestatus.com, where a white page has an SSIM of 60% (!!!) with the target. This greatly exaggerates the small differences in SSIM later in the page load, since the remaining 40% is remapped to 100%.

PSI is ~2600 under the flawed approach when it's realistically more like ~1600.
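
Concretely, with the chromestatus numbers above (illustrative values, not measured output):

const whiteVsTarget = 0.60; // blank page already "60% similar" to the target
const frameVsTarget = 0.80; // a mid-load frame

// Normalizing remaps the remaining 40% onto 0-100%, exaggerating the step:
const normalizedProgress = (frameVsTarget - whiteVsTarget) / (1 - whiteVsTarget); // 0.50
// Using SSIM directly as the progress once first paint has happened:
const rawProgress = frameVsTarget; // 0.80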

Set time origin to navigationStart

Right now speedline considers time 0 to be the beginning of the trace. However, many traces do not start the page load exactly at 0ms.

Here's the progressive-app.json that we're using as a fixture. navStart here is at 600ms, which is a lot later:

(screenshot: trace showing navigationStart at ~600ms)

Whoops!

Fixing this will change everyone's numbers a bit, but IMO, they'll be much more precise.

I'm proposing we do two things:

  1. speedline('./timeline.json', { timeOrigin: dateObj }).then(... -- We expose an option for the user to pass in a timeOrigin. This would become our new 0, and we'd also crop off any screenshots before this point.
  2. navStart detection fallback -- We find the navStart from the trace. Multiple frames and tabs make this harder than you'd think, but I have work for this in Lighthouse that we can use.

Does this make sense? Any alternatives?
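
For illustration, option 1 would look roughly like this from the caller's side (the timeOrigin option here is the proposal, not a guaranteed final signature):

const speedline = require('speedline');

// `navStart` stands in for whatever time origin the caller has determined;
// the proposal above sketches it as a Date object.
const navStart = new Date();
speedline('./timeline.json', { timeOrigin: navStart }).then(results => {
  // Screenshots before timeOrigin would be cropped, and first/complete/speedIndex
  // would be reported relative to it rather than to the first trace event.
  console.log(results.speedIndex);
});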

Histogram shown twice

I tried this tool, and it is great, thanks!

But it showed the histogram twice in my shell; I don't know why:

(screenshot: histogram printed twice)

SpeedLine vs the first paint API

Hi @paulirish, do you know how accurate the trace log is? I've been trying out SpeedLine to get visual metrics from our Android devices, to get rid of the overhead of using a video. It works great, but (there's always a but) the first visual change differs a lot from the first paint reported by the Paint Timing API. In this example the first visual change reported by SpeedLine is 778ms while the Paint Timing API reports 1.27s.

Manually looking at the trace log in DevTools, it looks like SpeedLine is correct (I added the orange frame just to see when we start the navigation):

(screenshot: trace viewed in DevTools, 2020-04-14)

Is this something you have seen in your tests?

Can't get it to work on Windows (Cygwin, Babun, Fish shell)

Hi!

I'd love to run speedline on my CLI on Windows (not PowerShell). I'm running Cygwin via Babun and Fish Shell, though the problem persists on other Cygwin-based shells on my system, too (MSYS2, Bash, ZSH).

Sadly, the process fails to run under Cygwin/WIN currently. Here's the error message for the default command:

(screenshot: error output for the default command)

And there's another error when trying to run with the --pretty parameter. I don't know if those two errors are related. I've tried both relative and absolute paths to the timeline JSON.

(screenshot: error output with --pretty)

Am I missing dependencies, or is a lib behaving differently under Windows than on OS X?

Thank you for your help!

Vulnerability issues in dependent package [email protected]

Hi,
The current version is using [email protected], which in turn depends on [email protected], which has a High severity vulnerability. It would be nice to publish a new release using [email protected] to fix that vulnerability.

[email protected] using trim-newlines@^4.0.2
https://github.com/sindresorhus/meow/blob/main/package.json#L54

Severity: High (Regular Expression Denial of Service in trim-newlines)
Package: trim-newlines
Patched in: >=3.0.1
Dependency of: @wdio/devtools-service [dev]
Path: @wdio/devtools-service > speedline > meow > trim-newlines

More info GHSA-7p7h-4mm5-852v

Thanks,
Guido.

npm ERR! enoent ENOENT: no such file or directory, chmod ../speedline/cli.js

Hey,
While installing speedline as a global package

npm i -g speedline

I get errors:

npm ERR! Darwin 15.5.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install" "-g" "speedline"
npm ERR! node v5.11.0
npm ERR! npm  v3.9.6
npm ERR! path /Users/hinok/.node/lib/node_modules/speedline/cli.js
npm ERR! code ENOENT
npm ERR! errno -2
npm ERR! syscall chmod

npm ERR! enoent ENOENT: no such file or directory, chmod '/Users/hinok/.node/lib/node_modules/speedline/cli.js'
npm ERR! enoent ENOENT: no such file or directory, chmod '/Users/hinok/.node/lib/node_modules/speedline/cli.js'
npm ERR! enoent This is most likely not a problem with npm itself
npm ERR! enoent and is related to npm not being able to find a file.
npm ERR! enoent

npm ERR! Please include the following file with any support request:
npm ERR!     /Users/hinok/npm-debug.log

My env

OS X 10.11.5
Node 5.11.0
npm 3.9.6

I also use prefix in .npmrc to install all modules in my home directory

prefix=/Users/hinok/.node

I've also found that this one works, but it doesn't put the binary in PATH (which is quite obvious):

npm i --no-bin-links -g speedline

Is it a problem on my side, or is something wrong with the package?

This problem doesn't occur when I install the older version 0.1.3:

npm install -g [email protected]
/Users/hinok/.node/bin/speedline -> /Users/hinok/.node/lib/node_modules/speedline/cli.js
/Users/hinok/.node/lib
└─┬ [email protected]
  ├─┬ [email protected]
  │ └── [email protected]
  ├─┬ [email protected]
  │ ├── [email protected]
  │ └── [email protected]
  ├── [email protected]
  ├─┬ [email protected]
  │ ├─┬ [email protected]
  │ │ └── [email protected]
  │ └── [email protected]
  └─┬ [email protected]
    ├─┬ [email protected]
    │ └── [email protected]
    ├── [email protected]
    ├── [email protected]
    ├── [email protected]
    ├─┬ [email protected]
    │ ├── [email protected]
    │ ├─┬ [email protected]
    │ │ └── [email protected]
    │ ├── [email protected]
    │ └─┬ [email protected]
    │   ├─┬ [email protected]
    │   │ └── [email protected]
    │   └─┬ [email protected]
    │     └── [email protected]
    ├── [email protected]
    ├─┬ [email protected]
    │ ├─┬ [email protected]
    │ │ ├── [email protected]
    │ │ └─┬ [email protected]
    │ │   └── [email protected]
    │ └─┬ [email protected]
    │   ├─┬ [email protected]
    │   │ ├── [email protected]
    │   │ ├─┬ [email protected]
    │   │ │ └─┬ [email protected]
    │   │ │   └── [email protected]
    │   │ ├── [email protected]
    │   │ └─┬ [email protected]
    │   │   └── [email protected]
    │   └── [email protected]
    ├─┬ [email protected]
    │ ├─┬ [email protected]
    │ │ └─┬ [email protected]
    │ │   └─┬ [email protected]
    │ │     └── [email protected]
    │ └─┬ [email protected]
    │   └── [email protected]
    └── [email protected]

loudRejection/api is deprecated

(node:18269) DeprecationWarning: loudRejection/api is deprecated. Use the currently-unhandled module instead.

Running speedline today spams this warning in the console. At some point loud-rejection bumped and a non-lockfile install picked it up.

(PR incoming...)
