
Comments (5)

mbostock commented on July 25, 2024

Strange. I don’t see a big difference between D3 and once in that benchmark.

(Also, it doesn’t seem like a practical benchmark to me: you can’t use a key in the data join to avoid redrawing rows that don’t change, because all rows change on every tick. Why would it ever be useful to look at a table where all rows are changing completely hundreds of times a second?)

I made three versions. One is a “normal” version:

http://bl.ocks.org/mbostock/c84a74960409b82b4293/2e9deaa41db94f6a6c949eb469f28aca3ca1e01c

If you assumed that there were always 50 rows in your table, you could eliminate the exit() and enter() blocks as a simple optimization, and just initialize the table statically with 50 rows. But maybe that’s cheating.

This version avoids closures for computing the text and class attributes, so it’s a little faster:

http://bl.ocks.org/mbostock/c84a74960409b82b4293/38e597c222e564137b11af2839a5834aeab450ab

This version further extracts the selectors (avoiding more closures), and uses memoization since in many cases (in this questionable benchmark) the new value is the same as the current value:

http://bl.ocks.org/mbostock/c84a74960409b82b4293/0c8b99c72e3ae8f0c3eed8b449faec7ce2140cdb

Of course, even the last version doesn’t eliminate all closures because some are created inside D3.

Anyway, so yes, you can make it faster by eliminating closures and using memoization. But I’m not totally convinced by this benchmark.
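Roughly, the two ideas look like this (a minimal sketch assuming D3 v4’s d3-selection API; the names `formatQuery` and `d.query` are illustrative, and this is not the code in the linked blocks):

```js
const formatQuery = d => d.query;      // hoisted accessor: created once, not on every tick
const lastText = new WeakMap();        // hypothetical per-node cache used for memoization

function setTextMemoized(selection, accessor) {
  selection.each(function(d) {
    const text = accessor(d);
    if (lastText.get(this) !== text) { // skip the DOM write when the value is unchanged
      lastText.set(this, text);
      this.textContent = text;
    }
  });
}

function update(rows) {
  const tr = d3.select("tbody").selectAll("tr").data(rows);
  tr.exit().remove();
  const trEnter = tr.enter().append("tr");
  trEnter.append("td").attr("class", "query");               // structure is created once, on enter
  setTextMemoized(trEnter.merge(tr).select("td.query"), formatQuery);
}
```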


mbostock commented on July 25, 2024

I appreciate your thoughtful suggestion, but I’m against this approach. D3 is intentionally low-level and tries to be direct. Caching the values previously set by D3 would make D3 potentially inconsistent with the “ground truth” if the DOM changes outside of D3.

Even checking the current value in the DOM before deciding whether to set is probably not a good idea in general (because the cost of checking can erode the gain of not-setting, and also there’s some complexity around CSS’s cascading and computed style values). Philosophically, I’d rather have browser vendors improve the performance of DOM manipulation than have D3 try (and likely fail) to make it faster through caching.

Also, the data-join is intended to help with this. If you join against new data, you can use the enter, update and exit selections to perform just the needed manipulations for each set of elements. This means you don’t need to re-set fields that haven’t changed on updating elements—you only need to set them on enter (and possibly on exit if you’re using transitions).
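For example (a rough sketch assuming D3 v4 and illustrative data and field names, not code from this repo):

```js
const row = d3.select("tbody").selectAll("tr")
    .data(databases, d => d.name);                 // key function: rows keep their identity

const rowEnter = row.enter().append("tr")
    .attr("class", "database");                    // static attribute: set once, on enter

rowEnter.append("td").attr("class", "name")
    .text(d => d.name);                            // the name never changes: enter only

rowEnter.append("td").attr("class", "count");      // create the cell on enter…

rowEnter.merge(row).select("td.count")
    .text(d => d.queries.length);                  // …but re-set its value on every update

row.exit().remove();
```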

You can also use selection.filter to avoid work if things haven’t changed, though obviously this performance gain isn’t automatic.

All that said, you’re welcome to write a D3 plugin and extend the selection prototype (see d3.selection for an example) to implement this if you think it would be useful!
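A hedged sketch of that plugin route, assuming D3 v4’s d3.selection (the method name `textIfChanged` is hypothetical, not part of d3-selection):

```js
d3.selection.prototype.textIfChanged = function(value) {
  return this.each(function(d, i) {
    const text = "" + (typeof value === "function" ? value.call(this, d, i) : value);
    if (this.textContent !== text) this.textContent = text;  // read before write
  });
};

// Usage: same shape as selection.text(), but skips nodes whose text is unchanged.
d3.selectAll("td.query").textIfChanged(d => d.query);
```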


pemrouz commented on July 25, 2024

Awesome, I'll definitely take a look at creating a plugin, perhaps after d3-selection and v4 have settled down a bit more.

Philosophically, I agree this caching/virtual-DOM-esque stuff should be improved at the browser level, but it's (a) surprisingly taxing today (in old and new browsers alike) and (b) for the same reason, better to also look for possible areas of improvement at the next level up rather than just entirely in user code.

Enter/update/exit/filter are architecturally super-useful for structuring components and for having the option to selectively refine operations for performance reasons. But complementing those component-specific optimisations, there seem to be some generic cases (classes, attr, text, html) which, if tackled at this level, could benefit all users/components (without any downside of becoming inconsistent with the DOM). Also, I don't think reading out those values is comparable to re-setting them (there may be several orders of magnitude difference).
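One rough way to sanity-check the read-versus-write claim (plain DOM, no D3; the numbers are browser-dependent and not from this thread, and a detached node understates layout costs):

```js
const td = document.createElement("td");
td.textContent = "hello";

console.time("read textContent");
for (let i = 0; i < 1e6; ++i) void td.textContent;       // read the current value
console.timeEnd("read textContent");

console.time("write same textContent");
for (let i = 0; i < 1e6; ++i) td.textContent = "hello";  // re-set the same value
console.timeEnd("write same textContent");
```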

Anyway, I look forward to seeing v4 shipped, and I'll let you know if I make any progress on experimenting with a d3-memoized-selection!


pemrouz commented on July 25, 2024

Small update: in attempting to maximise performance, there turned out to be much deeper problems than just the setters. I used DBMonster as a rough benchmark. D3 joins got ~10; for comparison, that was similar to Angular 1, with noticeable lag. React was ~20. The D3 stamp/update pattern is theoretically much simpler than the vdom approaches, though, so I knew it could do much better. I rewrote the join engine to collapse the call stack, and once hit ~40. Although the API is still the same, I don't think it would be possible to leverage the same tricks inside d3-selection or to ship it as a plugin. Instead, this could be used as an alternative core for users that need speed/terseness, one which also plays nicely with the rest of the d3 ecosystem/utils (things that are often passed in to .data or setters, etc.).


pemrouz commented on July 25, 2024

Sorry, I wasn't very clear. It's specifically D3 joins that are of interest, not D3 itself (or D3 vs once, since once was just a wrapper to reduce the join boilerplate). You can obviously render with D3 in many other ways, like precreating template HTML and cloning it.

The magic of joins, though, is that:

  1. they are entirely declarative, unlike jQuery or other ad hoc techniques
  2. they are JS and introduce no new language like traditional templating (handlebars, jade, angular, etc)
  3. they do not require a compilation step like JSX
  4. they do not split your view logic over JS & HTML
  5. they are interoperable because there is no secondary DOM structure like a virtual DOM
  6. they are data-driven (higher power-to-weight ratio), rather than a brittle 1:1 mapping with HTML, like JSX and vdom

Hence I think this is currently the best solution to generalise for structuring components and entire applications.
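For instance, a DBMonster-style table can be expressed as nothing but nested joins in plain JS, with no template language, compile step, or virtual DOM (a sketch assuming D3 v4 and illustrative field names):

```js
const row = d3.select("table").selectAll("tr")
    .data(databases, d => d.name);                 // one row per database

const cell = row.enter().append("tr").merge(row)
  .selectAll("td")
    .data(d => d.queries);                         // nested join: one cell per query

cell.enter().append("td")
  .merge(cell)
    .text(q => q.elapsed);                         // the data alone drives the DOM

cell.exit().remove();
row.exit().remove();
```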

I agree that DBMonster is also not a typical use case. It is, however, the closest thing to a cross-framework benchmark, and useful for stress-testing different approaches to find their upper limits. Do you know of any other rendering challenges that would be more useful to investigate?

PS. I didn't realise cloneNode was that much faster than createElement! I'll see if I can somehow leverage that too under the hood by caching the first creation of a join..
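A hedged sketch of that idea (the names are hypothetical and this is not how d3-selection works today): build the row structure once with createElement, then deep-clone it for each new row, e.g. via selection.append(function):

```js
let rowTemplate = null;

function createRow() {
  if (!rowTemplate) {                              // built once, with createElement
    rowTemplate = document.createElement("tr");
    ["name", "count", "query"].forEach(cls => {
      const td = document.createElement("td");
      td.className = cls;
      rowTemplate.appendChild(td);
    });
  }
  return rowTemplate.cloneNode(true);              // every row after that is a cheap deep clone
}

// Usage with an enter selection, assuming D3 v4:
// tr.enter().append(createRow);
```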
