tmcgee123 / karma-spec-reporter
Karma reporter that prints each executed spec to the command line (similar to mocha's spec reporter).
License: MIT License
I just added this reporter to the main AngularJS project, and it's looking really nice. However, I think there's a bug in the overall report, i.e. the passed/failed/total counts printed at the end of each run. A specific browser disconnected and ran only part of the tests, yet the spec-reporter still writes "SUCCESS" in the overall report.
See https://travis-ci.org/angular/angular.js/jobs/241613711#L779
It looks like this project is not being actively maintained and this feature is not in the last published version. However, it would be really useful. In its current state it doesn't work, but it can be fixed with two lines. Can I submit a pull request? Ideally it would be nice to publish the changes too.
I want to output the logged report as a simpler version of test report to a text file, for example:
App Admin Service
init
✓ should init
Chrome 70.0.3538 (Mac OS X 10.12.2): Executed 287 of 287 SUCCESS (4.119 secs / 3.945 secs)
TOTAL: 287 SUCCESS
=============================== Coverage summary ===============================
Statements : 98.59% ( 912/925 )
Branches : 97.31% ( 326/335 )
Functions : 98.51% ( 199/202 )
Lines : 98.86% ( 868/878 )
================================================================================
Is this possible?
Thanks.
Currently, we have the ability to generate code coverage with istanbul. I've looked around and found that Code Climate has integration with Travis CI and the nice badge for us to add to the README. Here's the link to their website.
Here's the link for getting it set up with Travis.
I plan on spending time this week to get our coverage up to acceptable standards. I really like to use these coverage reports as it helps identify potential points for refactoring and gives a nice reading as to what has been tested.
Any thoughts or concerns?
Basically, I want them.
Since I use karma-webpack to run my tests through my webpack config, I urgently need to be able to see backtraces. Because of some preprocessors, lines have an offset, or I see the wrong file entirely! But the backtrace itself would solve many, many problems.
That files aren't displayed correctly is a karma-webpack thing, for sure. But having no idea where an error is coming from is problematic.
How can I get them?
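For what it's worth, a configuration sketch that often restores usable backtraces with karma-webpack (assumes karma-sourcemap-loader is installed; the glob path is illustrative):

```javascript
// karma.conf.js sketch: inline source maps let stack traces point at the
// original files instead of the webpack bundle.
module.exports = function (config) {
  config.set({
    preprocessors: {
      'test/**/*.js': ['webpack', 'sourcemap']
    },
    webpack: {
      // Emit maps the sourcemap preprocessor can read.
      devtool: 'inline-source-map'
    }
  });
};
```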
Thanks for the plugin.
I'm having difficulty getting any 'passed' messages or change in output when I use the karma-spec-reporter plugin.
Karma v0.10.9
Jasmine 1.3.1
karma-spec-reporter 0.0.8
Output (included/excluded files and stack traces removed). Note 3 pass, 5 fail:
DEBUG [plugin]: Loading plugin karma-junit-reporter.
DEBUG [plugin]: Loading plugin karma-phantomjs-launcher.
DEBUG [plugin]: Loading plugin karma-spec-reporter.
DEBUG [plugin]: Loading plugin karma-jasmine.
INFO [karma]: Karma v0.10.9 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
DEBUG [launcher]: Creating temp dir at /tmp/karma-43222509
DEBUG [launcher]: /usr/local/lib/node_modules/phantomjs/lib/phantom/bin/phantomjs /tmp/karma-43222509/capture.js
DEBUG [watcher]: Resolved files:
DEBUG [web-server]: serving: /usr/local/lib/node_modules/karma/static/client.html
DEBUG [web-server]: serving: /usr/local/lib/node_modules/karma/static/karma.js
DEBUG [karma]: A browser has connected on socket oG3w5OxEZpd_JS-ug_Jv
INFO [PhantomJS 1.9.7 (Linux)]: Connected on socket oG3w5OxEZpd_JS-ug_Jv
DEBUG [karma]: All browsers are ready, executing
DEBUG [web-server]: serving: /usr/local/lib/node_modules/karma/static/context.html
DEBUG [web-server]: serving: /usr/local/lib/node_modules/karma-jasmine/lib/jasmine.js
PhantomJS 1.9.7 (Linux) Sometest should .... FAILED
PhantomJS 1.9.7 (Linux) Sometest should .... FAILED
PhantomJS 1.9.7 (Linux) Sometest should .... FAILED
PhantomJS 1.9.7 (Linux) Sometest should .... FAILED
undefined
...
PhantomJS 1.9.7 (Linux) Sometest FAILED
undefined
PhantomJS 1.9.7 (Linux): Executed 8 of 8 (5 FAILED) (1.291 secs / 0.088 secs)
DEBUG [launcher]: Disconnecting all browsers
DEBUG [launcher]: Killing PhantomJS
DEBUG [launcher]: Process PhantomJS exitted with code 0
DEBUG [launcher]: Cleaning temp dir /tmp/karma-43222509
My karma.conf.js excerpt:
module.exports = function(config) {
config.set({
// ...
frameworks: ['jasmine'],
browsers : ['PhantomJS'],
plugins : [
'karma-junit-reporter',
'karma-phantomjs-launcher',
'karma-spec-reporter',
'karma-jasmine'
],
reporters : [
'spec'
],
logLevel : config.LOG_DEBUG,
junitReporter : {
outputFile: 'test_out/unit.xml',
suite: 'unit'
}
});
}
When using autorun it'd be nice to be able to see the full tests (passed, skipped, failed) at the start and just the errors thereafter.
It seems like the version published as 0.0.24 doesn't match what's on GitHub. I've installed 0.0.24 locally, but it doesn't have any code for showSpecTiming. It still has the TODO on line 85:
//TODO: add timing information
Was the wrong version published to NPM?
Hi there,
When I have colors: false in my karma configuration, I would expect any colors used in this reporter to be disabled as well. However, when a test fails I still see the failed test appear in red and grey. This doesn't work particularly well with my terminal color scheme: the stack trace disappears because the background is the same grey being used in the output message.
It appears related to the logFinalErrors function. Here's a snippet of the code where I believe the problem is occurring.
this.writeCommonMsg((index + ') ' + failure.description + '\n').red); << here
this.writeCommonMsg((this.WHITESPACE + failure.suite.join(' ') + '\n').red); << here
failure.log.forEach(function (log) {
if (reporterCfg.maxLogLines) {
log = log.split('\n').slice(0, reporterCfg.maxLogLines).join('\n');
}
this.writeCommonMsg(this.WHITESPACE + formatError(log)
.replace(/\\n/g, '\n').grey); << here
}, this);
You can see the .red and .grey appended to the strings being passed to writeCommonMsg.
Any help here would be appreciated.
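For what it's worth, one way the reporter could gate coloring on the config. A hypothetical helper, not the actual code; `useColors` would come from Karma's `colors` setting, and the ANSI codes 31 (red) and 39 (reset foreground) match what the colors package emits:

```javascript
// Hypothetical helper showing how output could honor `colors: false`:
// only wrap the text in ANSI codes when coloring is enabled.
function maybeColor(text, ansiCode, useColors) {
  return useColors ? '\x1B[' + ansiCode + 'm' + text + '\x1B[39m' : text;
}

// With colors disabled, the failure line stays plain:
maybeColor('1) should work', '31', false); // → '1) should work'
```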
Why do I have a separate install for something that should be included by default?
I need to view a list of passing and failing tests. Period.
In my karma config, when I have:
reporters: ['dots', 'junit', 'jasmine-seed']
My specs run just fine. But as soon as I change that to:
reporters: ["spec"],
specReporter: {
maxLogLines: 5, // limit number of lines logged per test
suppressErrorSummary: true, // do not print error summary
suppressFailed: false, // do not print information about failed tests
suppressPassed: false, // do not print information about passed tests
suppressSkipped: true, // do not print information about skipped tests
showSpecTiming: false, // print the time elapsed for each spec
failFast: true // test would finish with error when a first fail occurs.
},
plugins: ["karma-spec-reporter"],
I get:
23 12 2021 11:39:21.195:ERROR [plugin]: Cannot load "webpack", it is not registered!
Perhaps you are missing some plugin?
23 12 2021 11:39:21.196:ERROR [plugin]: Cannot load "sourcemap", it is not registered!
Perhaps you are missing some plugin?
23 12 2021 11:39:21.197:ERROR [karma-server]: Server start failed on port 9876: Error: No provider for "framework:webpack"! (Resolving: framework:webpack)
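This is likely because setting `plugins` explicitly replaces Karma's default behavior of auto-loading every installed karma-* package, so the webpack and sourcemap frameworks lose their providers. One fix is to list everything the config uses (a sketch; the exact package names depend on your setup):

```javascript
// karma.conf.js fragment: when `plugins` is present, every plugin the config
// references must be listed, not just the new reporter.
plugins: [
  'karma-spec-reporter',
  'karma-webpack',
  'karma-sourcemap-loader',
  'karma-jasmine'
],
```

Alternatively, omitting the `plugins` array entirely restores the auto-loading behavior.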
When I run my Jasmine unit tests with karma-spec-reporter, in the logs I only see the text from the Jasmine describe blocks, but I would like to also see the text from the it blocks.
How can I do this?
I installed karma-spec-reporter v0.0.20 and noticed that the maxLogLines config option was not working. I walked backwards through your versions and found that v0.0.16 was the last version to support the maxLogLines config option.
Was it intentional to remove that option?
It'd be really nice if you could add a LICENSE file clarifying what, if any, open-source license karma-spec-reporter is available under.
Hi, it would be great if we printed the current and total case numbers as each new spec file runs.
Can you guide me to the files where the changes would go, so that I can contribute a PR for this?
... and have the specs grouped by browser so I can better understand where my tests pass/fail.
The ✔/✖ chars show up as boxes in CMD. Mocha got around this by using fallbacks: √ instead of ✔, and × instead of ✖.
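A minimal sketch of that fallback, assuming the reporter can check `process.platform` the way mocha does (constant names are hypothetical):

```javascript
// Hypothetical fallback: use √/× on Windows consoles that render ✓/✖ as boxes.
const onWindows = process.platform === 'win32';
const SYMBOL_SUCCESS = onWindows ? '\u221A' : '\u2713'; // √ vs ✓
const SYMBOL_FAILURE = onWindows ? '\u00D7' : '\u2716'; // × vs ✖
```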
I would like to run npm test with karma-spec-reporter and have it fail the suite (i.e exit with non-0 code) if there are skipped tests
One attempt at doing this is detailed here:
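Independent of that attempt, one possible shape for this is a tiny companion reporter. A sketch only: `browser.lastResult.skipped` is Karma's per-browser skip count, the reporter name is hypothetical, and the browsers argument is treated as a plain array for simplicity:

```javascript
// Sum the skip counts across browsers; missing lastResult counts as zero.
function countSkipped(browsers) {
  return browsers.reduce(function (sum, b) {
    return sum + ((b.lastResult && b.lastResult.skipped) || 0);
  }, 0);
}

// Hypothetical reporter: make the process exit non-zero when specs were skipped.
var FailOnSkippedReporter = function () {
  this.onRunComplete = function (browsers) {
    if (countSkipped(browsers) > 0) {
      process.exitCode = 1; // makes `npm test` fail
    }
  };
};
```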
Suggestion: add a "summaryOnly" boolean config flag. When set, this suppresses the error summary and effectively sets the maxLogLines to 0. So we see all the "describe" lines, all the "it" lines decorated with colour and tick/cross for pass/fail, and that's it.
After upgrading to karma v0.13.20 I no longer see any console output. Works fine in v0.13.19.
see mochajs/mocha#1170
I think this plugin/reporter is the best place to implement this feature.
I know it's a minor thing, but it would be nice if the status characters stood out more so when I visually scan the results I can quickly identify the start of each failure message. As it is, the 'x' seems to blend in with the failure message.
The success indicator isn't such an issue, because √ doesn't look like a regular character, unlike x. Still, it would be nice if it was green.
Example of the sort of test output I'm talking about:
ClassName
General category
MethodName
× has the expected value when stuff happens (1ms)
Expected 'bad value' to equal 'expected value'.
stack trace....
I have a few ideas of how this could be done (in order of decreasing preference)
The x could be colored in a way that stands out. This could be added as a config option: colors.success, colors.skipped, and colors.failure.
Change the indentation so that the 'x's stand out more
ClassName
General category
MethodName
× has the expected value when stuff happens (1ms)
Expected 'bad value' to equal 'expected value'.
stack trace....
Note: this is already the case for successful tests, since there is no error detail below.
ClassName
Method
√ does good stuff in the happy case (25ms)
√ does good stuff in such and such a corner case (10ms)
√ does the right bad stuff when such and such a precondition isn't met (12ms)
So aligning the error details with the test description rather than with the status indicator character might be the most consistent way.
Actually I don't really like this idea, but basically it's changing the status character from 'x' to '[x]' so that it doesn't look like a regular character. The first two ideas seem much nicer to me.
ClassName
General category
MethodName
[×] has the expected value when stuff happens (1ms)
Expected 'bad value' to equal 'expected value'.
stack trace....
My test run prints
Chrome Headless 110.0.5481.96 (Linux x86_64): Executed 964 of 1017 SUCCESS (1 min 12.64 secs / 44.511 secs)
TOTAL: 964 SUCCESS
Two things are not clear to me:
Executed 964 of 1017: what does 1017 refer to here? I don't see any skipped or pending tests in the test output (I set suppressSkipped: false), so I don't understand the discrepancy.
1 min 12.64 secs / 44.511 secs: what are those two numbers, and why the difference?
I couldn't find any documentation on either question.
I am not sure whether I have missed a config option, but the current output I get is:
MyFirstDescribeSpec
✓ first it() within the describe (735ms)
✓ second it() within the describe (321ms)
✓ third it() within the describe (287ms)
MySecondDescribeSpec
✓ first testcase (735ms)
✓ second testcase (321ms)
Is it possible to show the number of each test case? Something like:
1. MyFirstDescribeSpec
✓ 1.1 first it() within the describe (735ms)
✓ 1.2 second it() within the describe (321ms)
✓ 1.3 third it() within the describe (287ms)
2. MySecondDescribeSpec
✓ 2.1 first testcase (735ms)
✓ 2.2 second testcase (321ms)
And if there are nested describes, we can just make it 1.1.1 and so on.
While I know a progress bar is not feasible because of how Karma works, my test suite has roughly 100 describes, so knowing which number is currently running would be very useful for gauging progress.
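That numbering scheme could be produced by a small helper as suites and specs stream in (hypothetical, not part of the reporter): `counters[depth]` tracks the next number at each nesting level, and entering a shallower level resets the deeper ones.

```javascript
// Hypothetical helper deriving hierarchical numbers ("1", "1.1", "1.1.1", …).
function makeNumberer() {
  var counters = [];
  return function next(depth) {
    counters[depth] = (counters[depth] || 0) + 1;
    counters.length = depth + 1; // reset deeper levels
    return counters.join('.');
  };
}

var number = makeNumberer();
number(0); // '1'   → 1. MyFirstDescribeSpec
number(1); // '1.1' → ✓ 1.1 first it() within the describe
number(1); // '1.2'
number(0); // '2'   → 2. MySecondDescribeSpec
```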
Running a failing test with colors: false messes up the output in some consoles (e.g., the IntelliJ run console).
The release descriptions don't contain any information about the changes, so a changelog is needed.
Hi guys
When I use karma-jasmine with fdescribe or fit, it still shows all the specs in the console.
LastMileAssignRunsheetViewStore
✓ should return empty runsheets at the beginning
✓ should return edit mode = false at the beginning
✓ should subscribing LOADING_DRIVERS_SUCCESS event
Case subscribing SELECT_DRIVER
✗ Should add packages to runsheet
TypeError: 'undefined' is not an object (evaluating 'LastMileAssignRunsheetViewStore.runsheets['123'].packages')
at /stores/LastMileAssign/__tests__/LastMileAssignRunsheetViewStore-test.js:139
ErrorAlertComponent
- Should not render ErrorAlertComponent if store has no error
- Should render ErrorAlertComponent if store has error
I am using Karma 0.11.0 with karma-spec-reporter 0.0.5. On using the 'spec' reporter I get the following error:
https://gist.github.com/brijsrivastava/6992182
I fixed the error by creating an empty this._browsers array inside this.onRunStart method.
So this is the new onRunStart method:
this.onRunStart = function(browsers) {
this._browsers = [];
browsers.forEach(function(browser) {
// useful properties
browser.id;
browser.fullName;
});
};
Could you let me know whether this is a correct fix?
Thanks!
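For reference, a slightly tightened sketch of the same idea: initialize the array, actually collect the browsers, and guard against onRunStart being called without arguments. (A sketch matching the snippet above, not the maintainers' fix; the constructor name is hypothetical.)

```javascript
// Hypothetical reporter fragment: build this._browsers safely in onRunStart.
function Reporter() {
  this.onRunStart = function (browsers) {
    this._browsers = [];
    (browsers || []).forEach(function (browser) {
      this._browsers.push(browser); // keep id, fullName, etc. for later hooks
    }, this);
  };
}
```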
Our build npm script concatenates multiple npm commands including running unit tests.
When the failFast option is enabled and a unit test fails, the unit tests terminate but the build doesn't stop; it continues to the next npm command (running e2e tests). If I disable failFast, the build terminates as expected at the end of the unit test command.
It looks like karma-spec-reporter throws an error when failFast is enabled. Does this emit the same error signal as a failing test does without failFast enabled?
I am using the following versions of packages and Angular v6:
"karma": "~1.7.1",
"karma-chrome-launcher": "~2.2.0",
"karma-coverage-istanbul-reporter": "~1.4.2",
"karma-jasmine": "~1.1.1",
"karma-jasmine-html-reporter": "^0.2.2",
"karma-junit-reporter": "^1.2.0",
"karma-spec-reporter": "0.0.32",
Hey,
with one of your last updates, you changed your dependencies definition:
- "colors": "1.4.0"
+ "colors": "^1.4.0"
This resolves to installing colors@1.4.2, which has been broken since January 2022 (see for example this issue). Please change the dependency definition back to the pinned version "1.4.0".
Although version 1.4.2 was removed from the official npm registry, it is still available on some registry mirrors (e.g. company-driven internal registries with npm mirrors).
Thanks.
@FjVillar Here is the pull request you asked for earlier:
karma-runner/karma#2742
I hope it gets through at some point...
Hey
I was wondering if we could include a check against browserConsoleLogOptions.level, so that only messages at or above the configured level are printed.
I believe it should be something around here: https://github.com/mlex/karma-spec-reporter/blob/master/index.js#L126-L135
This would enable us to suppress the info and warning messages, for example, while keeping the errors.
Cheers
I would like the seed that is used for randomizing the tests to be printed out.
If I use the random: true configuration option for Jasmine and do not provide my own seed, the seed gets generated. I would like to know what that seed is, so that I can rerun my tests with that seed, i.e. in the exact same order.
I am trying to troubleshoot several tests that are failing randomly depending on the execution order.
Thank you.
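As a partial workaround, the seed can be fixed up front so the order is at least reproducible. A karma.conf.js sketch, assuming karma-jasmine passes client options through to Jasmine; the seed value is illustrative:

```javascript
// karma.conf.js fragment: pin Jasmine's randomization seed via client options.
client: {
  jasmine: {
    random: true,
    seed: '4321' // reuse the seed from a failing run to reproduce its order
  }
},
```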
Related to #24: when config.specReporter is not defined, the error shown in the title appears. There should also be a check for the existence of config.specReporter.
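The usual defensive pattern could look like this (a sketch; the function name and the default values are illustrative, not the reporter's actual code):

```javascript
// Fall back to an empty object so property reads never throw when
// config.specReporter is absent.
function readReporterConfig(config) {
  var cfg = (config && config.specReporter) || {};
  return {
    maxLogLines: cfg.maxLogLines || 999,
    suppressErrorSummary: !!cfg.suppressErrorSummary
  };
}
```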
Rather than showing an example project link, why not have install instructions in the readme?
npm install karma-spec-reporter --save-dev
then add 'spec' to the reporters in karma.conf.js, e.g.
reporters: ['progress','coverage', 'spec'],
Hi,
Awesome reporter! I ran into one issue though: when I run my tests with the --no-color flag or the colors: false setting, the reporter still uses colors. This screws up the output in my logs a bit.
CI will allow us to enforce code coverage standards and enable us to catch any errors that may happen when anyone makes changes.
With Mocha for server side tests, tests that log info display nicely. For example:
✓ test 1
✓ test 2
✓ test 3
Some logging
More logging
✓ test 4
More logging
✓ test 5
More logging
✓ test 6
With karma-spec-reporter, logging while tests run is unreadable and comes out more like this:
✓ test 1
✓ test 2
✓ test 3 LOG: LOG: Some logging LOG: LOG: More logging ✓ test 4 LOG: LOG: More logging ✓ test 5 LOG: LOG: More logging ✓ test 6
It would make the output much better.
This is how it is:
canActivate
✓ should rename sessionid to sessionId
✗ should work with probable future naming of sessionId Error: expected true to equal false
at /home/martin/ok/minside/node_modules/karma-expect/node_modules/expect.js/expect.js:99
at /home/martin/ok/minside/node_modules/karma-expect/node_modules/expect.js/expect.js:203
//etc
✓ should navigate to the returnTo adress
This IMO would be better:
canActivate
✓ should rename sessionid to sessionId
✗ should work with probable future naming of sessionId
Error: expected true to equal false
at /home/martin/ok/minside/node_modules/karma-expect/node_modules/expect.js/expect.js:99
at /home/martin/ok/minside/node_modules/karma-expect/node_modules/expect.js/expect.js:203
//etc
✓ should navigate to the returnTo adress
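The suggested layout could be produced by a small formatter (hypothetical names): start the error detail on its own line and indent every line of it to align with the spec description.

```javascript
// Prefix each line of a multi-line error detail with the given indent.
function indentBlock(text, indent) {
  return text
    .split('\n')
    .map(function (line) { return indent + line; })
    .join('\n');
}

var detail = 'Error: expected true to equal false\nat expect.js:99';
'\n' + indentBlock(detail, '    '); // newline first, then aligned detail lines
```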
Hi, on my local machine the tick character prints correctly, but in Jenkins it prints as [32m✓ �[39m when viewing in a web browser. Is there any configuration to change the tick mark?
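One common workaround, rather than changing the character, is to disable colors when running on CI so Jenkins never receives raw ANSI escape sequences. A karma.conf.js sketch; `process.env.CI` is an assumption about your CI environment:

```javascript
// karma.conf.js fragment: colored output locally, plain output on CI.
colors: !process.env.CI,
```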
I am running a lot of unit tests and would like to see a better aggregate of failed tests, so that I don't have to scroll up through the stack traces.
Karma 0.10.0 is now out. The package.json of this plugin is held back to ~0.9.0.
I have modified my local copy to ">=0.9.0" and everything works great on my system.
Please update the package.json to allow karma 0.10.0.
Thanks.
I am experiencing a problem where, if there are enough errors while running the tests, the reporter just keeps dumping error information into the terminal but no longer shows which tests it is referring to. Any idea what might be the cause of this?
I use karma-spec-reporter together with karma-summary-reporter. This duplicates the per-browser summary information. The spec reporter's summary gives no additional information, so I want to disable it to make the logs cleaner.
...
12 05 2021 02:32:52.779:INFO [Chrome 91.0.4472.19 (Mac OS 10.15.7)]: Connected on socket osI9sAFQMqkFcfciAAAV with id 6541155
12 05 2021 02:32:58.732:INFO [launcher]: Starting browser Safari (iOS 11) on BrowserStack
12 05 2021 02:32:59.567:INFO [launcher.browserstack]: Safari (iOS 11) session at https://automate.browserstack.com/builds/d32acb61ba0b49a773fd15343582e04f9bccf3a2/sessions/d89e9e49436da887bc156bf712b504470d065874
12 05 2021 02:33:03.638:INFO [Firefox 89.0 (Mac OS 10.15)]: Connected on socket Qpnmm3J-EIuyHyOvAAAX with id 41180324
12 05 2021 02:33:08.015:INFO [launcher]: Starting browser Safari (iOS 12) on BrowserStack
12 05 2021 02:33:09.416:INFO [launcher.browserstack]: Safari (iOS 12) session at https://automate.browserstack.com/builds/d32acb61ba0b49a773fd15343582e04f9bccf3a2/sessions/b35b9d7105853466cf297d4f1446e8504941c1e7
12 05 2021 02:33:22.191:INFO [Edge 91.0.864.11 (Mac OS 10.15.7)]: Connected on socket tjNjMfJloyS493kIAAAZ with id 88171334
12 05 2021 02:33:27.483:INFO [Chrome 89.0.4389.105 (Android 10)]: Connected on socket dLKpCKmiL4GLakhMAAAb with id 31260405
12 05 2021 02:33:29.625:INFO [launcher]: Starting browser Safari (iOS 13) on BrowserStack
12 05 2021 02:33:29.745:INFO [launcher]: Starting browser Safari (iOS 14) on BrowserStack
12 05 2021 02:33:31.147:INFO [launcher.browserstack]: Safari (iOS 14) session at https://automate.browserstack.com/builds/d32acb61ba0b49a773fd15343582e04f9bccf3a2/sessions/f2af78f96a581d0b70a6d5f3e1ca1465dacac787
12 05 2021 02:33:46.894:INFO [Mobile Safari 10.0 (iOS 10.3.1)]: Connected on socket ILqp3FmR_cyJGt9wAAAd with id 37773618
12 05 2021 02:34:13.867:INFO [Mobile Safari 11.0 (iOS 11.2.5)]: Connected on socket ySZOxO0Kf6OazwM4AAAf with id 30807524
12 05 2021 02:34:45.152:INFO [Mobile Safari 13.0.4 (iOS 13.3)]: Connected on socket FLHmVI-CROwC1EHmAAAh with id 73324590
12 05 2021 02:34:49.601:INFO [Mobile Safari 14.0 (iOS 14.0.1)]: Connected on socket 7JngK1BGJoLPbWylAAAj with id 2497055
12 05 2021 02:35:00.428:WARN [launcher.browserstack]: Safari (iOS 12) has not captured in 60000 ms, killing.
12 05 2021 02:35:01.529:INFO [launcher.browserstack]: Safari (iOS 12) session at https://automate.browserstack.com/builds/d32acb61ba0b49a773fd15343582e04f9bccf3a2/sessions/45ed1355264adcad0ba7fe62c8be74fc509495ea
12 05 2021 02:35:21.532:INFO [Mobile Safari 12.1.1 (iOS 12.3.1)]: Connected on socket xx0bWfVN-4xgfXJYAAAl with id 52294822
IE 11.0 (Windows 7): Executed 51 of 57 (skipped 6) SUCCESS (0.877 secs / 0.581 secs)
Chrome 91.0.4472.19 (Windows 10): Executed 52 of 57 (skipped 5) SUCCESS (1.3 secs / 0.988 secs)
Edge 18.18363 (Windows 10): Executed 51 of 57 (skipped 6) SUCCESS (1.232 secs / 0.908 secs)
Edge 91.0.864.11 (Windows 10): Executed 52 of 57 (skipped 5) SUCCESS (1.073 secs / 0.891 secs)
Chrome 42.0.2311.135 (Windows 10): Executed 52 of 57 (skipped 5) SUCCESS (1.427 secs / 1.18 secs)
Firefox 48.0 (Windows 10): Executed 51 of 57 (skipped 6) SUCCESS (1.031 secs / 0.728 secs)
Firefox 89.0 (Windows 10): Executed 51 of 57 (skipped 6) SUCCESS (1.423 secs / 1.07 secs)
Safari 9.1.3 (Mac OS 10.11.6): Executed 54 of 57 (skipped 3) SUCCESS (4.889 secs / 1.932 secs)
Safari 12.1.2 (Mac OS 10.14.6): Executed 54 of 57 (skipped 3) SUCCESS (1.865 secs / 0.525 secs)
Safari 14.0.3 (Mac OS 10.15.6): Executed 54 of 57 (skipped 3) SUCCESS (1.75 secs / 0.787 secs)
Chrome 91.0.4472.19 (Mac OS 10.15.7): Executed 52 of 57 (skipped 5) SUCCESS (1.399 secs / 0.56 secs)
Firefox 89.0 (Mac OS 10.15): Executed 51 of 57 (skipped 6) SUCCESS (1.307 secs / 0.424 secs)
Edge 91.0.864.11 (Mac OS 10.15.7): Executed 52 of 57 (skipped 5) SUCCESS (1.442 secs / 0.697 secs)
Chrome 89.0.4389.105 (Android 10): Executed 53 of 57 (skipped 4) SUCCESS (1.071 secs / 0.739 secs)
Mobile Safari 10.0 (iOS 10.3.1): Executed 55 of 57 (skipped 2) SUCCESS (0.832 secs / 0.665 secs)
Mobile Safari 11.0 (iOS 11.2.5): Executed 55 of 57 (skipped 2) SUCCESS (0.621 secs / 0.479 secs)
Mobile Safari 13.0.4 (iOS 13.3): Executed 55 of 57 (skipped 2) SUCCESS (0.713 secs / 0.382 secs)
Mobile Safari 14.0 (iOS 14.0.1): Executed 55 of 57 (skipped 2) SUCCESS (0.632 secs / 0.456 secs)
Mobile Safari 12.1.1 (iOS 12.3.1): Executed 55 of 57 (skipped 2) SUCCESS (0.58 secs / 0.424 secs)
TOTAL: 1005 SUCCESS
SUMMARY
0: IE 11.0 (Windows 7): Executed 51 of 57 (skipped 6) SUCCESS (0.877 secs / 0.581 secs)
1: Chrome 91.0.4472.19 (Windows 10): Executed 52 of 57 (skipped 5) SUCCESS (1.3 secs / 0.988 secs)
2: Edge 18.18363 (Windows 10): Executed 51 of 57 (skipped 6) SUCCESS (1.232 secs / 0.908 secs)
3: Edge 91.0.864.11 (Windows 10): Executed 52 of 57 (skipped 5) SUCCESS (1.073 secs / 0.891 secs)
4: Chrome 42.0.2311.135 (Windows 10): Executed 52 of 57 (skipped 5) SUCCESS (1.427 secs / 1.18 secs)
5: Firefox 48.0 (Windows 10): Executed 51 of 57 (skipped 6) SUCCESS (1.031 secs / 0.728 secs)
6: Firefox 89.0 (Windows 10): Executed 51 of 57 (skipped 6) SUCCESS (1.423 secs / 1.07 secs)
7: Safari 9.1.3 (Mac OS 10.11.6): Executed 54 of 57 (skipped 3) SUCCESS (4.889 secs / 1.932 secs)
8: Safari 12.1.2 (Mac OS 10.14.6): Executed 54 of 57 (skipped 3) SUCCESS (1.865 secs / 0.525 secs)
9: Safari 14.0.3 (Mac OS 10.15.6): Executed 54 of 57 (skipped 3) SUCCESS (1.75 secs / 0.787 secs)
10: Chrome 91.0.4472.19 (Mac OS 10.15.7): Executed 52 of 57 (skipped 5) SUCCESS (1.399 secs / 0.56 secs)
11: Firefox 89.0 (Mac OS 10.15): Executed 51 of 57 (skipped 6) SUCCESS (1.307 secs / 0.424 secs)
12: Edge 91.0.864.11 (Mac OS 10.15.7): Executed 52 of 57 (skipped 5) SUCCESS (1.442 secs / 0.697 secs)
13: Chrome 89.0.4389.105 (Android 10): Executed 53 of 57 (skipped 4) SUCCESS (1.071 secs / 0.739 secs)
14: Mobile Safari 10.0 (iOS 10.3.1): Executed 55 of 57 (skipped 2) SUCCESS (0.832 secs / 0.665 secs)
15: Mobile Safari 11.0 (iOS 11.2.5): Executed 55 of 57 (skipped 2) SUCCESS (0.621 secs / 0.479 secs)
16: Mobile Safari 13.0.4 (iOS 13.3): Executed 55 of 57 (skipped 2) SUCCESS (0.713 secs / 0.382 secs)
17: Mobile Safari 14.0 (iOS 14.0.1): Executed 55 of 57 (skipped 2) SUCCESS (0.632 secs / 0.456 secs)
18: Mobile Safari 12.1.1 (iOS 12.3.1): Executed 55 of 57 (skipped 2) SUCCESS (0.58 secs / 0.424 secs)
all 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
Browser utilities
engine detection
detects iPad ✔ - - - - - - - - - - - - - - ✔ ✔ ✔ ✔ ✔
detects desktop Safari ✔ - - - - - - - ✔ ✔ ✔ - - - - ✔ ✔ ✔ ✔ ✔
detects Chromium 86+ ✔ - ✔ - ✔ ✔ - - - - - ✔ - ✔ ✔ - - - - -
detects Safari 12+ ✔ - - - - - - - ✔ ✔ ✔ - - - - ✔ ✔ ✔ ✔ ✔
Sources
domBlockers
handles absence of blockers ✔ - - - - - - - ✔ ✔ ✔ - - - ✔ ✔ ✔ ✔ ✔ ✔
handles blocked selectors ✔ - - - - - - - ✔ ✔ ✔ - - - ✔ ✔ ✔ ✔ ✔ ✔
returns `undefined` for unsupported browsers ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ - - - ✔ ✔ ✔ - - - - - -
50 more test cases successful in all browsers