codemix / fast.js
Faster user-land reimplementations for several common builtin native JavaScript functions.
License: MIT License
If this is the desired behavior, then I think it deserves a quick mention in the comments associated with that method, since Object.assign
copies own enumerable keys. Otherwise, I've always wondered which is faster:
for (var key in obj) {
  if (obj.hasOwnProperty(key)) {
    // code
  }
}
// or...
var keys = Object.keys(obj);
var len = keys.length;
for (var i = 0; i < len; i++) {
  // code
}
Guess I'll benchmark it.
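Since the question above is about which own-key loop is faster, here is a minimal sketch of the two styles side by side (the helper names are mine, purely illustrative), useful as a correctness check before benchmarking:

```javascript
// Style 1: for...in guarded by hasOwnProperty.
function sumViaForIn(obj) {
  var total = 0;
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      total += obj[key];
    }
  }
  return total;
}

// Style 2: Object.keys with an indexed loop.
function sumViaKeys(obj) {
  var keys = Object.keys(obj);
  var len = keys.length;
  var total = 0;
  for (var i = 0; i < len; i++) {
    total += obj[keys[i]];
  }
  return total;
}

var sample = { a: 1, b: 2, c: 3 };
console.log(sumViaForIn(sample), sumViaKeys(sample)); // 6 6
```

Both visit exactly the own enumerable keys, so either is a fair candidate for a benchmark.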
I'm trying to run the JSPerf benchmark on an embedded system, and jspm is failing, so I'm unable to run the tests. Can you include a compiled version of fast.js in the repo so people can test without having to point to a local copy of the lib?
I am not sure I titled this correctly, but the idea is that it would be cool if this library could be used in a build step (gulp, grunt, others) to optimize natively written code.
Hi, I'm looking at the benchmark results in your README and it says:
fast.try() vs try {} catch (e) {}
✓ try...catch x 111,674 ops/sec ±1.75% (87 runs sampled)
✓ fast.try() x 3,645,964 ops/sec ±1.47% (81 runs sampled)
Winner is: fast.try() (3164.83% faster)
So I thought "WTF? How did they implement try/catch in JS, and how could it possibly be so fast?"... So I compared the sources:
exports['try'] = function fastTry (fn) {
  try {
    return fn();
  }
  catch (e) {
    if (!(e instanceof Error)) {
      return new Error(e);
    }
    else {
      return e;
    }
  }
};
So, yeah. It is just a callback wrapped in try, of course. Then I checked the benchmark.
exports['try...catch'] = function () {
  try {
    var d = 0;
    factorial(10);
    factorial(2);
    return d;
  }
  catch (e) {
    return e;
  }
};
...and it is the same thing. So how is such a big difference possible? I measured it myself with Date.now() before and after. My guess is that exports['try'] gets JITed because exports['try...catch'] is run before it; the latter was not JITed but caused the former to be JITed for some reason. I also think you should run each benchmark (and each sub-test of each benchmark) in a separate process, to make sure different benchmark scenarios don't interfere with each other. In other words, the benchmark for fast.js's try should run in a separate process and be all that runs there; then the builtin try in a separate process; then the same for underscore, and so on for all benchmarks.
Another possibility is that exports['try...catch'] and exports['try'] are optimized in different ways because, for example, one has 1 line inside the try block and the other has 5, which is still good enough as long as it works. But I don't like it. It's just like V8, where calling Array.push() 1000 times is faster than using new Array( 1000 ): they have some specific optimisation of the push function, so it forces you to write .push() to yield faster code, but it was the opposite in earlier versions of V8. Same thing here with fast.js, where we are calling two wrappers and they are faster than calling the code itself without wrappers. Sure, we should use it if it works in practice, but I don't like writing code that is compiler specific, because it is not reliable between compilers or even between different versions of the same compiler.
But hey, think about it: "3164.83% faster" for the same code. First, it is misleading advertising; second, you are hitting some kind of optimizer edge case, there is no doubt about it. No problem with the latter, but nevertheless, don't put up misleading advertisement, because it could become slower in the next V8 release.
I'm just finishing up the testing code, and the places lodash wins are on the 1000-item array benchmarks.
I believe this is because lodash uses while loops, which perform slightly better on large arrays. Is there a reason to be using for loops that I'm not seeing? If not, I'll do a new PR with while loops.
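For illustration, here is a while-loop version of a simple array traversal in the style the comment attributes to lodash (the function name is made up):

```javascript
// while-based traversal using the pre-increment idiom.
function sumWhile(arr) {
  var total = 0;
  var i = -1;
  var len = arr.length;
  while (++i < len) {
    total += arr[i];
  }
  return total;
}

console.log(sumWhile([1, 2, 3])); // 6
```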
I was able to make this a bit faster by removing the variable definition for length
(it should also use less memory). This trick can probably be used in a few more places too.
function fastLastIndexOf (subject, target) {
  var length = subject.length,
      i;
  for (i = length - 1; i >= 0; i--) {
    if (subject[i] === target) {
      return i;
    }
  }
  return -1;
}
vs
function myLastIndexOf (subject, target) {
  for (var i = subject.length - 1; i >= 0; i--) {
    if (subject[i] === target) {
      return i;
    }
  }
  return -1;
}
Great improvements today! Can you give me a heads-up when the new stuff goes up on npm? Thanks!
https://github.com/codemix/fast.js/blob/master/dist/fast.js#L497-L520
In this code, when args.length < 9 you use a switch and .call() for the different arities, and fall back to .apply() only when args.length >= 9. Why not just always use apply, like:
function applyWithContext (subject, thisContext, args) {
  return subject.apply(thisContext, args || []);
}
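For context, here is a trimmed-down sketch of the switch-plus-.call() dispatch the question is describing (the real fast.js code handles more arities; this is not the actual implementation):

```javascript
// Dispatch on arity: use .call() for small argument counts,
// fall back to .apply() for the rest.
function applyWithContextSwitch(subject, thisContext, args) {
  switch (args.length) {
    case 0: return subject.call(thisContext);
    case 1: return subject.call(thisContext, args[0]);
    case 2: return subject.call(thisContext, args[0], args[1]);
    default: return subject.apply(thisContext, args);
  }
}

console.log(applyWithContextSwitch(Math.max, null, [1, 5, 3])); // 5
```

The usual rationale is that .call() with a fixed argument list avoided the argument-array handling of .apply() in older V8 versions, though this is engine-specific.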
When using indexOf it does not work properly...
var keyword = "du kannst zum teufel gehen";
var search = "zum teufel gehen";
console.log(fast.indexOf(keyword, search)); // -1
Typed arrays have huge memory and cpu advantages over JavaScript Arrays.
Have you considered typed array usage? A project I'm working on favors typed arrays over JavaScript arrays, both for cpu speed and for memory efficiency.
And in an extreme case, rather than using an Array of Objects, we use a single Object of Typed Arrays, where each variable in the object corresponds to a Typed Array, so foo[i].var === foo.var[i]. In a sense, it is struct-like.
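A small sketch of the struct-of-arrays layout described above, with illustrative field names:

```javascript
// One object of typed arrays instead of an array of objects:
// particles.x[i] plays the role of particles[i].x.
var count = 3;
var particles = {
  x: new Float64Array(count),
  y: new Float64Array(count)
};

particles.x[0] = 1.5;
particles.y[0] = -2.0;

console.log(particles.x[0], particles.y[0]); // 1.5 -2
```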
Array#reduce and the libs that follow it allow the initial value to be omitted, using the first value of the given array as the initial value in those cases.
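Concretely, native reduce behaves like this when the initial value is omitted:

```javascript
// With an initialValue, the accumulator starts at the seed...
var sumWithSeed = [1, 2, 3].reduce(function (acc, n) { return acc + n; }, 10);

// ...without one, the first element becomes the accumulator.
var sumNoSeed = [1, 2, 3].reduce(function (acc, n) { return acc + n; });

console.log(sumWithSeed, sumNoSeed); // 16 6
```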
Would you perhaps be open to the idea of having typings, using the likes of TypeScript? It may seem to clash a bit with the fast.js
name, but to JS end-users the results would be the same anyway. :P
Based on http://jsperf.com/fast-vs-lodash I'm seeing numbers like so for forEach:
Firefox:
"fast": 47,688 ops/s
native: 123,187 ops/s
Chrome:
"fast": 71,070 ops/s
native: 20,112 ops/s
The "fast" versions are both faster than V8's builtin, but both slower than SpiderMonkey's builtin.
I see similar results for most of the fast.js functions, except indexOf/lastIndexOf, where fast.js does better than both builtins.
Partial application is great and can be applied to constructors as well. For example:
function MyWidget(options) { /*..*/ }
MyWidget.prototype.get = function() { /*..*/ };
var MyWidgetWithCoolOpts = fast.partial(MyWidget, {/*some options*/});
var widget = new MyWidgetWithCoolOpts();
widget instanceof MyWidget; // true
typeof widget.get; // 'function'
It could if it did an instanceof
check:
exports.partial = function fastPartial (fn) {
  var boundLength = arguments.length - 1,
      boundArgs;
  boundArgs = new Array(boundLength);
  for (var i = 0; i < boundLength; i++) {
    boundArgs[i] = arguments[i + 1];
  }
  return function partialed() {
    var length = arguments.length,
        args = new Array(boundLength + length),
        i;
    for (i = 0; i < boundLength; i++) {
      args[i] = boundArgs[i];
    }
    for (i = 0; i < length; i++) {
      args[boundLength + i] = arguments[i];
    }
    /** new part **/
    if (this instanceof partialed) {
      var thisBinding = Object.create(fn.prototype),
          result = fn.apply(thisBinding, args);
      return (Object(result) === result) ? result : thisBinding;
    }
    /** end */
    return fn.apply(this, args);
  };
};
Related to #13
I understand that you've tried to make the code universal. But wouldn't it be faster if you selected only {{ and }}, and then decided what to do with the parts? I am just curious about the claim. Is it truly the fastest, or not?
The current benchmark output looks like this:
Native .lastIndexOf() vs fast.lastIndexOf() (10 items)
✓ Array::lastIndexOf() x 17,124,729 ops/sec ±1.79% (92 runs sampled)
✓ fast.lastIndexOf() x 29,032,323 ops/sec ±1.78% (87 runs sampled)
✓ underscore.lastIndexOf() x 12,149,850 ops/sec ±1.82% (92 runs sampled)
✓ lodash.lastIndexOf() x 21,171,936 ops/sec ±1.74% (90 runs sampled)
Winner is: fast.lastIndexOf() (138.95% faster)
The 138.95% faster figure is computed from the slowest candidate (here underscore.lastIndexOf()) compared to the fastest one (in this case fast.lastIndexOf()). Wouldn't it be more relevant to always compare the speedup against the built-in Array::lastIndexOf, as the goal of fast.js is to compete with the built-in functions?
Let me know what you think. I am happy to make the changes and provide a PR.
If your software is being used in production, it should probably already be 1.0.0.
(https://semver.org/spec/v2.0.0.html)
Fast.js seems to already be intended for production and has over 3,000 stars.
If you feel like you're too busy to write a changelog, consider using semantic-release, which not only automates releases but also generates a changelog in your GitHub releases from your (carefully written) commit messages.
Please go ahead and release version 1.0.0 so that npm can pull in the latest bug fixes when installing ^1.0.0 etc.
function fn(args) {
  "use asm";
  // code
}
As per #78, adding documentation might be helpful. Unfortunately, this will probably be kind of a pain since most methods adhere to the same signature as underscore/lodash.
Alternately, instead of full-fledged docs, the Readme could just point out some of the particular/unique decisions, such as the use of instanceof Array
to choose between array iterators and object iterators.
fastConcat() can probably be improved, because from what I've seen the push() function is usually slower than the vanilla code: array[i] = value;
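A rough sketch of what the suggested indexed-assignment approach might look like (fastConcatIndexed is an illustrative name, not the actual fast.js code):

```javascript
// Preallocate the result and fill it by index instead of push().
function fastConcatIndexed(a, b) {
  var aLen = a.length;
  var bLen = b.length;
  var result = new Array(aLen + bLen);
  for (var i = 0; i < aLen; i++) {
    result[i] = a[i];
  }
  for (var j = 0; j < bLen; j++) {
    result[aLen + j] = b[j];
  }
  return result;
}

console.log(fastConcatIndexed([1, 2], [3, 4])); // [ 1, 2, 3, 4 ]
```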
Looking at the problem described in http://toddmotto.com/ditch-the-array-foreach-call-nodelist-hack :
var myNodeList = document.querySelectorAll('li'); // grabs some <li>
// Uncaught TypeError: Object #<NodeList> has no method 'forEach'
myNodeList.forEach(function (item) {
  // :(
});
We need to extend fast.forEach to support NodeList.
The new function Object.values is 2x faster than fast.values on V8.
✓ Native Object.keys().map() x 5,834,570 ops/sec ±0.29% (97 runs sampled)
✓ Native Object.values() x 23,794,424 ops/sec ±0.29% (99 runs sampled)
✓ fast.values() x 11,353,450 ops/sec ±0.65% (94 runs sampled)
✓ underscore.values() x 5,779,038 ops/sec ±0.46% (94 runs sampled)
✓ lodash.values() x 3,933,563 ops/sec ±1.49% (96 runs sampled)
Maybe we could shim it in fast.js like Object.keys?
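A possible shape for such a shim, assuming the same feature-detection approach used for the Object.keys shim (this is a sketch, not fast.js code):

```javascript
// Prefer the native Object.values; fall back to a keys-based loop.
var values = typeof Object.values === 'function'
  ? Object.values
  : function values(obj) {
      var keys = Object.keys(obj);
      var result = new Array(keys.length);
      for (var i = 0; i < keys.length; i++) {
        result[i] = obj[keys[i]];
      }
      return result;
    };

console.log(values({ a: 1, b: 2 })); // [ 1, 2 ]
```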
It would be great to simply call require('fast.js').polyfill()
and have the native methods replaced. This way every library would get faster and I wouldn't need to use these methods explicitly.
It could surely cause conflicts, but that's why it would remain optional.
I'd love to get more speed, but I don't want to be including and calling fast for every simple thing I do.
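A hedged sketch of what such an opt-in polyfill() might look like for a single method. fastForEach here is a stand-in, not the real fast.js implementation; since fast.js methods take the subject as the first argument, they need wrapping:

```javascript
// Stand-in for a fast.js-style iterator (subject-first signature).
function fastForEach(subject, fn) {
  for (var i = 0; i < subject.length; i++) {
    fn(subject[i], i, subject);
  }
}

// Opt-in: only replaces the native method when explicitly called.
function polyfill() {
  Array.prototype.forEach = function (fn, thisContext) {
    fastForEach(this, thisContext ? fn.bind(thisContext) : fn);
  };
}

polyfill();
var seen = [];
[10, 20].forEach(function (item) { seen.push(item); });
console.log(seen); // [ 10, 20 ]
```

Overwriting builtin prototypes is exactly where the conflicts mentioned above would come from, which is why it has to stay opt-in.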
I think it's important that we try and keep fast.js faster than lodash and underscore.
If we added both libraries to the benchmarks, the tests would run slower, but we'd be better able to verify what looks like the main goal of this library: being fast.
Right? I'm working on a fork of it now.
A dev can provide undefined as an initialValue in those cases where you'd want to use it instead of the first element of the array.
Is Array.prototype.sort already as fast as it can be?
Native allows passing a fromIndex to indexOf or lastIndexOf, which is useful when you have a known starting point.
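For reference, this is the native fromIndex behavior the fast.js versions would need to match:

```javascript
var list = ['a', 'b', 'a', 'b'];

console.log(list.indexOf('a', 1));     // 2 (search starts at index 1)
console.log(list.lastIndexOf('b', 2)); // 1 (search backwards from index 2)
```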
I'm curious how your fast.apply or its internal helpers affect other methods that are currently using .apply(...). Have you tried using them in place of .apply(...)?
Has any thought been given to a rewrite where one could do require('fast/forEach')
(or something similar) to just get that function? This would be particularly useful for situations where client-side code built with Browserify (or webpack? I'm not too familiar with it) wanted to use specific functions without including the entire library. I'd be happy to contribute some time if there's significant interest.
Carried over from r14163945.
It was mentioned that fast.js is planned to be primarily used in Node.js. Many npm packages (ex: express event handlers) rely on the bound function's .length
being set. I was bitten by this too, as my version doesn't set the .length of bound functions.
Also, I (and later Underscore) was bitten by not handling the case where a bound function is called as a constructor (ex: new bound). This broke use cases in Backbone.js. Not sure if you want to tackle that, but it gave me some grief not supporting it early on.
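The constructor case mentioned above looks like this with the native Function.prototype.bind, whose behavior a fast bind would need to emulate:

```javascript
function Widget(name) { this.name = name; }

var BoundWidget = Widget.bind(null, 'demo');
var w = new BoundWidget();

// new ignores the bound this but keeps the bound args
// and the prototype chain.
console.log(w instanceof Widget, w.name); // true demo
```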
Would be convenient.