
fast.js's People

Contributors

bttmly, faustbrian, gyeates, joseluisq, jviereck, kkirsche, leonfedotov, megawac, numminorihsf, phpnode, richayotte, samsonradu


fast.js's Issues

fast.assign has no own property check

If this is the desired behavior then I think it deserves a quick mention in the comments associated with that method since Object.assign copies own enumerable keys. Otherwise, I've always wondered which is faster:

for (var key in obj) {
  if (obj.hasOwnProperty(key)) {
    // code
  }
}

// or...

var keys = Object.keys(obj);
var len = keys.length;
for (var i = 0; i < len; i++) {
  // code
} 

Guess I'll benchmark it
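
A rough way to benchmark the two loop styles above (a sketch using console.time; the key count and iteration count are arbitrary):

// build a sample object
var obj = {};
for (var n = 0; n < 100; n++) {
  obj['key' + n] = n;
}

console.time('for-in + hasOwnProperty');
for (var run = 0; run < 100000; run++) {
  var sum1 = 0; // accumulate so the loop body isn't dead code
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      sum1 += obj[key];
    }
  }
}
console.timeEnd('for-in + hasOwnProperty');

console.time('Object.keys');
for (var run2 = 0; run2 < 100000; run2++) {
  var keys = Object.keys(obj);
  var sum2 = 0;
  for (var i = 0, len = keys.length; i < len; i++) {
    sum2 += obj[keys[i]];
  }
}
console.timeEnd('Object.keys');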

jspm not supported on benchmark target.

I'm trying to run the JSPerf benchmark on an embedded system and jspm is failing, so I'm unable to run the tests. Can you include a compiled version of fast.js in the repo so people can test without having to point to a local copy of the lib?

Traceur Plugin

I am not sure I titled this correctly, but the idea is that it would be cool if this library could be used in a build step - gulp, grunt, others - to optimize code written against the native methods.

I think the benchmark is incorrect

Hi, I'm looking at the benchmark results in your README and it says:

fast.try() vs try {} catch (e) {}
    ✓  try...catch x 111,674 ops/sec ±1.75% (87 runs sampled)
    ✓  fast.try() x 3,645,964 ops/sec ±1.47% (81 runs sampled)

    Winner is: fast.try() (3164.83% faster)

So I think "WTF? How did they implement try/catch in JS and how it could possibly be so fast?"... So I compared the sources:

exports['try'] = function fastTry (fn) {
  try {
    return fn();
  }
  catch (e) {
    if (!(e instanceof Error)) {
      return new Error(e);
    }
    else {
      return e;
    }
  }
};

So, yeah. It is just a callback wrapped in try, of course. Then I checked the benchmark.

exports['try...catch'] = function () {
  try {
    var d = 0;
    factorial(10);
    factorial(2);
    return d;
  }
  catch (e) {
    return e;
  }
}

...and it is the same thing. So how is such a big difference possible?

  1. I'm not sure the benchmarking code is reliable enough. I can't say for sure because I haven't inspected the Benchmark.Suite() internals, but it looks like it has some async machinery in it, and since we are measuring time I would avoid that. IMO it would be better to simply loop 1000 times over the different functions and measure Date.now() before and after.
  2. I think you should do a dry run of the whole benchmark first, without measuring anything, to make sure the JIT and all other optimisations are warmed up, and only then run the benchmark again and measure. Otherwise it is possible, for example, that exports['try'] gets JITed only because exports['try...catch'] ran before it. Also, I think you should run each benchmark (and each sub-test of each benchmark) in a separate process so that different benchmark scenarios don't interfere with each other. In other words, the fast.js try benchmark should run in a separate process and be the only thing that runs there (twice, in the way described above), then the built-in try in its own process, then the same for underscore and so on, for all benchmarks.
  3. It is possible that exports['try...catch'] and exports['try'] are optimized in different ways because, for example, one has 1 line inside the try block and the other has 5, which is still good enough as long as it works, but I don't like it. It is just like V8, where calling Array.push() 1000 times is faster than using new Array(1000); they have a specific optimisation of the push function, so it forces you to write .push() to yield faster code, but it was the opposite in earlier versions of V8. Same thing here with fast.js, where we are calling two wrappers and they are faster than calling the code itself without wrappers. Sure, we should use it if it works in practice, but I don't like writing compiler-specific code, because it is not reliable between compilers or even between different versions of the same compiler.

But hey, think about it: "3164.83% faster" for what is essentially the same code. First, it is misleading advertising; second, you are hitting some kind of optimizer edge case, there is no doubt about it. No problem with the latter, but don't keep the misleading advertisement, because it could become slower in the next V8 release.
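
For reference, the naive timing approach suggested in point 1 would look roughly like this (a sketch; it assumes factorial and fast are in scope, as in the benchmark files, and the 1000 iterations are arbitrary):

// assumption: factorial() and fast are defined elsewhere, as in the benchmarks
function timeIt (name, fn) {
  var start = Date.now();
  for (var i = 0; i < 1000; i++) {
    fn();
  }
  console.log(name + ': ' + (Date.now() - start) + 'ms');
}

timeIt('try...catch', function () {
  try { factorial(10); factorial(2); } catch (e) { return e; }
});

timeIt('fast.try()', function () {
  fast.try(function () { factorial(10); factorial(2); });
});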

`while` loops

I'm just finishing up the testing code, and the places lodash wins are the 1000-item array benchmarks.

I believe this is because lodash uses while loops, which perform slightly better on large arrays. Is there a reason to be using for loops I'm not seeing? If not, I'll do a new PR with while loops.
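
For context, the two loop shapes under discussion differ only in form; here is a sketch of a forEach body in each style (not the actual fast.js or lodash source):

// for-loop style (current fast.js approach)
function forEachFor (subject, fn) {
  var length = subject.length;
  for (var i = 0; i < length; i++) {
    fn(subject[i], i, subject);
  }
}

// while-loop style (lodash approach)
function forEachWhile (subject, fn) {
  var i = -1,
      length = subject.length;
  while (++i < length) {
    fn(subject[i], i, subject);
  }
}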

Faster lastIndexOf

I was able to make this a bit faster by removing the separate variable definition for length (it should also use less memory). Probably this trick can be used in a few more places too.

function fastLastIndexOf (subject, target) {
  var length = subject.length,
      i;
  for (i = length - 1; i >= 0; i--) {
    if (subject[i] === target) {
      return i;
    }
  }
  return -1;
};

vs

function myLastIndexOf (subject, target) {
  for (var i = subject.length - 1; i >= 0; i--) {
    if (subject[i] === target) {
      return i;
    }
  }
  return -1;
};

Publish to NPM

Great improvements today, can you give me a heads up when the new stuff goes up on NPM? Thanks!

indexOf() not working

When using indexOf it does not work properly...

var keyword = "du kannst zum teufel gehen";
var search = "zum teufel gehen";

console.log(fast.indexOf(keyword, search)); // -1

Typed Arrays

Typed arrays have huge memory and CPU advantages over JavaScript Arrays.

Have you considered typed array usage? A project I'm working on favors typed arrays over JavaScript arrays, both for CPU speed and for memory efficiency.

And an extreme case, rather than using an Array of Objects, we use a single Object of Typed Arrays, where each variable in the object corresponds to a Typed Array. So foo[i].var === foo.var[i]. In a sense, it is struct-like.
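
To illustrate the struct-of-arrays layout described above (a sketch; the field names are made up):

// Array of Objects: one JS object per item
var aos = [
  { x: 1.5, y: 2.5 },
  { x: 3.5, y: 4.5 }
];

// Object of Typed Arrays: one Float64Array per field
var soa = {
  x: new Float64Array([1.5, 3.5]),
  y: new Float64Array([2.5, 4.5])
};

// The same value, addressed both ways:
aos[1].x === soa.x[1]; // true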

Typings?

Would you perhaps be open to the idea of having typings using the likes of TypeScript? It may seem to clash a bit with the fast.js name, but to JS end-users the results would be the same anyway. :P

fast.js forEach underperforms native forEach in SpiderMonkey

Based on http://jsperf.com/fast-vs-lodash I'm seeing numbers like so for forEach:

Firefox:

"fast": 47,688 ops/s
native: 123,187 ops/s

Chrome:

"fast": 71,070 ops/s
native: 20,112 ops/s

The "fast" versions are both faster than V8's builtin, but both slower than SpiderMonkey's builtin.

I see similar results for most of the fast.js functions, except indexOf/lastIndexOf, where it does better than both builtins.

partial support for constructors

Partial application is great and can be applied to constructors as well. For example:

function MyWidget(options) { /*..*/ }
MyWidget.prototype.get = function() { /*..*/ };

var MyWidgetWithCoolOpts = fast.partial(MyWidget, {/*some options*/});

var widget = new MyWidgetWithCoolOpts();
widget instanceof MyWidget // true
typeof widget.get // function

It could support this if it did an instanceof check:

exports.partial = function fastPartial (fn) {
  var boundLength = arguments.length - 1,
      boundArgs;

  boundArgs = new Array(boundLength);
  for (var i = 0; i < boundLength; i++) {
    boundArgs[i] = arguments[i + 1];
  }
  return function partialed() {
    var length = arguments.length,
        args = new Array(boundLength + length),
        i;
    for (i = 0; i < boundLength; i++) {
      args[i] = boundArgs[i];
    }
    for (i = 0; i < length; i++) {
      args[boundLength + i] = arguments[i];
    }
    /** new part **/
    if (this instanceof partialed) {
      var thisBinding = Object.create(fn.prototype),
          result = fn.apply(thisBinding, args);

      return (Object(result) === result) ? result : thisBinding;
    }
    /** end */
    return fn.apply(this, args);
  };
};

Related to #13

Relate speedup numbers to default implementation?

The current benchmark output looks like this:

  Native .lastIndexOf() vs fast.lastIndexOf() (10 items)
    ✓  Array::lastIndexOf() x 17,124,729 ops/sec ±1.79% (92 runs sampled)
    ✓  fast.lastIndexOf() x 29,032,323 ops/sec ±1.78% (87 runs sampled)
    ✓  underscore.lastIndexOf() x 12,149,850 ops/sec ±1.82% (92 runs sampled)
    ✓  lodash.lastIndexOf() x 21,171,936 ops/sec ±1.74% (90 runs sampled)

    Winner is: fast.lastIndexOf() (138.95% faster)

The 138.95% figure is derived by comparing the slowest candidate (here underscore.lastIndexOf()) to the fastest one (in this case fast.lastIndexOf()). Wouldn't it be more relevant to always compare the speedup against the built-in Array::lastIndexOf(), since the goal of fast.js is to compete with the built-in functions?
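
For the numbers above, the two ways of reporting the speedup work out like this (ops/sec values taken from the sample output):

var fastOps = 29032323;    // fast.lastIndexOf()
var nativeOps = 17124729;  // Array::lastIndexOf()
var slowestOps = 12149850; // underscore.lastIndexOf()

// Current report: fastest vs slowest candidate
((fastOps / slowestOps) - 1) * 100; // ~138.95% faster

// Proposed report: fastest vs built-in
((fastOps / nativeOps) - 1) * 100;  // ~69.5% faster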

Let me know what you think. I am happy to do the changes and provide a PR.

Use semantic versioning (start with v1) and maintain a changelog

If your software is being used in production, it should probably already be 1.0.0.
(https://semver.org/spec/v2.0.0.html)

Fast.js seems to already be intended for production and has over 3,000 stars.

But it has no changelog!! 😱

If you feel like you're too busy to write a changelog, consider using semantic-release, which not only automates releases but generates a changelog in your GitHub releases from your (carefully written) commit messages.

Please go ahead and release version 1.0.0 so that

  • npm can pull in the latest bug fixes when installing ^1.0.0 etc.
  • you will be motivated to write a changelog
  • users will be able to find breaking change information quickly

Docs

As per #78, adding documentation might be helpful. Unfortunately, this will probably be kind of a pain in the butt since most methods adhere to the same signature as underscore/lodash.

Alternatively, instead of full-fledged docs, the Readme could just point out some of the particular/unique decisions, such as the use of instanceof Array to choose between array iterators and object iterators.
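
For example, the dispatch mentioned above amounts to something like the following (a sketch of the pattern, not the exact fast.js source):

function forEachArray (subject, fn, thisContext) {
  for (var i = 0, length = subject.length; i < length; i++) {
    fn.call(thisContext, subject[i], i, subject);
  }
}

function forEachObject (subject, fn, thisContext) {
  var keys = Object.keys(subject);
  for (var i = 0, length = keys.length; i < length; i++) {
    fn.call(thisContext, subject[keys[i]], keys[i], subject);
  }
}

function forEach (subject, fn, thisContext) {
  // instanceof Array decides between the array and object iterators
  if (subject instanceof Array) {
    return forEachArray(subject, fn, thisContext);
  }
  return forEachObject(subject, fn, thisContext);
}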

Faster fastConcat()

fastConcat() can probably be improved, because from what I've seen the push() function is usually slower than plain index assignment: array[i] = value;
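
A minimal sketch of what an index-assignment concat for two arrays might look like (not the actual fast.js implementation):

function concatByIndex (a, b) {
  var aLength = a.length,
      bLength = b.length,
      result = new Array(aLength + bLength),
      i;
  // copy the first array, then the second, by direct index assignment
  for (i = 0; i < aLength; i++) {
    result[i] = a[i];
  }
  for (i = 0; i < bLength; i++) {
    result[aLength + i] = b[i];
  }
  return result;
}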

Object.values is faster than fast.values

The new function Object.values is about 2x faster than fast.values on V8.

    ✓  Native Object.keys().map() x 5,834,570 ops/sec ±0.29% (97 runs sampled)
    ✓  Native Object.values()) x 23,794,424 ops/sec ±0.29% (99 runs sampled)
    ✓  fast.values() x 11,353,450 ops/sec ±0.65% (94 runs sampled)
    ✓  underscore.values() x 5,779,038 ops/sec ±0.46% (94 runs sampled)
    ✓  lodash.values() x 3,933,563 ops/sec ±1.49% (96 runs sampled)

Maybe we could shim it in fast.js like Object.keys?
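
A rough sketch of the kind of shim suggested here (this is not existing fast.js code; it just prefers the native function when available and falls back to an Object.keys loop otherwise):

var fastValues = typeof Object.values === 'function'
  ? Object.values
  : function fastValues (obj) {
      var keys = Object.keys(obj),
          length = keys.length,
          values = new Array(length);
      for (var i = 0; i < length; i++) {
        values[i] = obj[keys[i]];
      }
      return values;
    };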

Allow fast to be used as drop-in polyfill

It would be great to simply call require('fast.js').polyfill() and have the native methods replaced. This way every library would get faster and I wouldn't need to use these methods explicitly.

It surely can cause conflicts but that's why it remains optional.

I'd love to get more speed but I don't want to be including and calling fast for every simple thing I do.
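
A sketch of what the opt-in polyfill could look like (hypothetical API; fast.js does not currently expose polyfill(), and patching built-in prototypes is exactly the kind of conflict mentioned above):

// hypothetical helper, not part of fast.js
function polyfill () {
  Array.prototype.forEach = function (fn, thisContext) {
    return fast.forEach(this, fn, thisContext);
  };
  Array.prototype.map = function (fn, thisContext) {
    return fast.map(this, fn, thisContext);
  };
  // ...and so on for the other supported methods
}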

Add underscore to benchmarks

I think it's important that we try and keep fast.js faster than lodash and underscore.

If we added both libraries to the benchmarks, the tests would run slower, but we'd be better able to track what looks like the main goal of this library: being fast.

Right? I'm working on a fork of it now.

Try using `fast.apply` in other methods.

I'm curious how your fast.apply or its internal helpers would affect other methods that are currently using .apply(...). Have you tried using them in place of .apply(...)?
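
For reference, the idea behind a fast apply is to replace a dynamic .apply(...) call with a switch on argument count, along these lines (a sketch of the pattern, not necessarily fast.js's actual helper):

function applyWithContext (fn, thisContext, args) {
  // dispatch on arity so the common cases avoid Function.prototype.apply
  switch (args.length) {
    case 0: return fn.call(thisContext);
    case 1: return fn.call(thisContext, args[0]);
    case 2: return fn.call(thisContext, args[0], args[1]);
    case 3: return fn.call(thisContext, args[0], args[1], args[2]);
    default: return fn.apply(thisContext, args);
  }
}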

Modularize

Has any thought been given to a re-write where one could do require('fast/forEach') (or something similar) to get just that function? This would be particularly useful for situations where client-side code built with Browserify (or WebPack? I'm not too familiar with it) wants to use specific functions without including the entire library. I'd be happy to contribute some time if there's significant interest.

bind considerations

Carried over from r14163945.

It was mentioned that fast.js is planned to be primarily used in Node.js. Many npm packages (ex: express event handlers) rely on the bound function's .length being set. I was bitten by this too, as my version doesn't set the .length of bound functions.

Also, I and later Underscore were bitten by not handling the case where a bound function is called as a constructor (ex: new bound). This broke use cases in Backbone.js. Not sure if you want to tackle that, but it gave me some grief not supporting it early on.
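
Two small examples of the behaviors mentioned, shown against native Function.prototype.bind for comparison (a sketch; a fast bind would need to preserve both):

function add (a, b, c) { return a + b + c; }

// 1. .length of the bound function
var add1 = add.bind(null, 1);
add1.length; // 2 -- some packages inspect this to detect arity

// 2. calling a bound function as a constructor
function Point (x, y) { this.x = x; this.y = y; }
var PointAt1 = Point.bind(null, 1);
var p = new PointAt1(2);
p instanceof Point; // true -- the bound thisArg is ignored under `new`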
