Comments (17)
@twalpole so then only the Cuprite vs Poltergeist comparison is valid
from cuprite.
@twalpole in fact I don't see where it's set for Selenium, whereas in Poltergeist/Cuprite it's set to 0
@route Capybara resets the time to 1 at https://github.com/teamcapybara/capybara/blob/master/lib/capybara/spec/spec_helper.rb#L26, which is called after every test. Cuprite resets that to 0 in a before block at https://github.com/machinio/cuprite/blob/master/spec/spec_helper.rb#L122, which technically means it's not actually valid to claim compatibility with Capybara, since the tests aren't actually using the wait times Capybara expects (same with Poltergeist, which I hadn't noticed does the same thing). It would be interesting to see what the timing is for Cuprite on your hardware without the wait time override, since there really isn't anything that should make it much faster than Selenium locally.
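For context, here is a minimal sketch (plain Ruby, not Capybara's actual code) of the retry loop behind Capybara's waiting behavior. With a max wait time of 0, a failing finder gives up after a single attempt instead of retrying, which is exactly why zeroing it speeds up a suite full of "element should not appear" assertions:

```ruby
# Toy version of Capybara's synchronize retry loop: keep retrying the
# block until it succeeds or the wait time elapses.
def synchronize(max_wait_time)
  start = Time.now
  begin
    yield
  rescue StandardError
    raise if Time.now - start >= max_wait_time
    sleep 0.05
    retry
  end
end

# With a wait time of 0 the block runs exactly once.
attempts = 0
begin
  synchronize(0) { attempts += 1; raise "element not found" }
rescue StandardError
end

# With a positive wait time the block is retried until the time is up.
retries = 0
begin
  synchronize(0.3) { retries += 1; raise "element not found" }
rescue StandardError
end
```

Here `attempts` ends up at 1 while `retries` climbs well past it, which is the whole difference the zeroed wait time makes for negative assertions.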
@twalpole oh wow, I didn't notice it either, even in Poltergeist :) haha, ok let's see what we have
If I remove this line from the spec helper, I get for Cuprite:
Finished in 8 minutes 44 seconds (files took 1.22 seconds to load)
1533 examples, 0 failures, 147 pending
yeah, that's closer to Selenium:
Finished in 9 minutes 3 seconds (files took 5.98 seconds to load)
That's more like what I would expect with the tests being skipped, etc. -- they really should be approximately equal in speed when run with the same settings, just with Cuprite able to support more features.
with Poltergeist in 3rd place, I'm surprised
Finished in 11 minutes 49 seconds (files took 0.54019 seconds to load)
Not really a surprise -- the Capybara test suite is not like a real project's test suite. It purposely does a lot of things a user would/should not do, in order to test edge case behaviors. This means any timings for its test suite really aren't relevant to real-world project timing.
yeah, but still, all three ran in the same environment, right? Yet PhantomJS is slower than Chrome
yeah -- Chrome has come a long way during the time no PhantomJS development occurred -- speedups in the browser (and headless mode) should have made it similar in speed, given that a large part of the Capybara test suite's slowness is it intentionally waiting for things to happen/not happen.
Updated README, thanks for pointing this out!
@route Thank you for a fantastic library -- we have been working on upgrading a large Rails app from 4.2 and capybara-webkit to 5.0, and have found Cuprite/Ferrum to be pretty close to a drop-in replacement for capybara-webkit. We use too many JS features for Selenium, and Apparition started with a ton of test failures, mostly due to timing issues.
The one issue I'm trying to understand is why our test suite has slowed by 50-100% as we re-enter the modern era of browser-based testing. We have ~2,000 tests, and with Capybara 2.18/capybara-webkit/Ruby 2.5 we ran in ~20 minutes.
With Capybara 3.30/Cuprite using headless Chrome and a whitelist, we started at 40 minutes with Ruby 2.5. Upgrading to Ruby 2.6 improved that to 33 minutes, but that's still a 50% performance penalty.
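A driver registration along these lines is what's being described -- a sketch with placeholder hostnames, not the app's actual config (note that the `url_whitelist` option was renamed `url_allowlist` in later Cuprite versions):

```ruby
require "capybara/cuprite"

Capybara.register_driver(:cuprite) do |app|
  Capybara::Cuprite::Driver.new(
    app,
    headless: true,
    # Placeholder whitelist: only requests to these hosts are allowed through.
    url_whitelist: ["http://127.0.0.1", "http://localhost"]
  )
end
Capybara.javascript_driver = :cuprite
```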
Benchmarks have been hard to come by online; are you aware of any reason why headless CDP would be significantly slower than the relatively ancient capybara-webkit? I was trying to dig into the CDP implementation of Ferrum versus Apparition to better understand how the two libraries deal with waiting for asynchronous browser events -- it seems that Apparition is not working for us because it is trying to be too fast, and not waiting long enough for basic things like the page's application.js file to be loaded... while Cuprite passes all of our capybara-webkit tests albeit relatively slowly.
Any thoughts greatly appreciated! Trying to have the best of both worlds :-)
@lawso017 You are correct about the waiting: Cuprite indeed waits for some events to happen and only then proceeds, and this applies to many methods, not only goto. If you click or evaluate JS, that can start a page navigation, and of course we have to wait until the page fully loads.
As for speed, I guess we had the same issues after switching from Poltergeist. I've seen the time almost double (if you run all your tests in sequence), from 10m to 19m, and investigated it by merging some improvements into Ferrum, which worked. What I can say now is that CDP as a protocol is not slow, in spite of the many messages passing between client and server. I thought network interception might be a reason for slowing down tests, but it looks like that's insignificant too, though it has some impact. Comparing whole test suites led me to conclude that Poltergeist starts to speed up on resets between tests and on subsequent requests to the application (which may involve caching?), but comparing tests one by one there's no clear winner: Chrome is as fast as (or sometimes even faster than) Poltergeist.
Anyway, after spending some time on speed improvements and comparing results on CircleCI with parallel builds, the difference was only 1-3m compared to Poltergeist, so we decided a modern browser is better than an outdated one even if it is slower. But I'm afraid it's not that simple to fix; it requires a lot of time and energy, and it may be Chrome-related, which makes it even harder because they barely answer even simple issues like ChromeDevTools/devtools-protocol#125 and ChromeDevTools/devtools-protocol#145
So for now I've stopped investigating speed issues and started working on features to make Cuprite/Ferrum the best tools for working with CDP in Ruby, but I only have 2 hands lol :)
I may revisit performance issue once again in the future after implementing important features.
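To make the message traffic mentioned above concrete, here is a toy sketch (not Ferrum's implementation) of the CDP command pattern: every command is a JSON message carrying an incrementing id, and the client matches each response back by that id, so each command costs at least one round trip over the WebSocket. A fake in-process transport stands in for the socket:

```ruby
require "json"

# Toy CDP-style client: each command gets an incrementing id, and responses
# are matched back to the command that issued them.
class ToyCDPClient
  def initialize(transport)
    @transport = transport
    @id = 0
  end

  def command(method, params = {})
    @id += 1
    message = { "id" => @id, "method" => method, "params" => params }
    response = JSON.parse(@transport.call(JSON.generate(message)))
    raise "id mismatch" unless response["id"] == @id
    response["result"]
  end
end

# Fake transport: parses the request and echoes back a matching response.
transport = lambda do |raw|
  request = JSON.parse(raw)
  JSON.generate({ "id" => request["id"], "result" => { "method" => request["method"] } })
end

client = ToyCDPClient.new(transport)
result = client.command("Page.navigate", "url" => "http://example.com")
```

With a real browser each of those round trips crosses a WebSocket, which is why the per-message cost (rather than the protocol itself) is where slowdowns were suspected.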
@route thank you for that context -- I also noticed that disconnect between individual tests running quickly but the overall test suite being relatively slow by comparison. That's interesting, and I'll continue to keep an eye on it for future exploration! In the meantime, we're also happy to be testing with a modern browser again.
@lawso017 Surprisingly, I found that Capybara.server = :puma adds 2.5 minutes to the build for our application; check whether this is the case for you. I'm investigating it now. You may reduce your build time with :webrick lol
@route I have been unable to replicate a speed improvement using Capybara.server = :webrick -- that slows things down by a couple of minutes relative to Puma in our environment.
I have observed something else of interest, though... we are building on CircleCI using two containers and I was seeing some sporadic failures due to timeouts with the default 5 sec browser timeout. As I increased the timeout, however, the rspec job became much slower... when profiling the run, it looks like increasing the timeout is causing a slower overall run for some reason.
Here's an example comparing two successful runs:
10 sec timeout:
Container 1 -- Top 10 slowest examples (296.26 seconds, 29.8% of total time)
Container 2 -- Top 10 slowest examples (247.2 seconds, 21.8% of total time)
=> 27.1 sec avg across the 20 slowest examples, 19:54 total time
15 sec timeout:
Container 1 -- Top 10 slowest examples (368.18 seconds, 28.6% of total time)
Container 2 -- Top 10 slowest examples (373.79 seconds, 30.0% of total time)
=> 37.1 sec avg across the 20 slowest examples, 22:49 total time
Looking at Ferrum, it seems like the key line is simply data = pending.value!(@browser.timeout) in the command method of browser/client.rb.
It does not seem like increasing the timeout should reduce the responsiveness of the call to pending.value!, but that is what appears to be happening... and I've not used concurrent-ruby before.
I would have expected that increasing the timeout would allow for occasional slow responses without generating a Timeout error, but not in general result in overall slower performance. In my case, increasing the timeout makes our test suite take longer. A run with a 30 sec timeout topped out at 34:58 total time.
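A minimal sketch of that expected behavior, using plain Ruby threads as a stand-in for the Concurrent::IVar that Ferrum actually waits on: the timeout passed to value! should only be an upper bound, and a waiter should wake as soon as the value arrives, so raising the timeout shouldn't slow down commands whose responses come back quickly:

```ruby
# Toy stand-in (not Ferrum's code) for the pending-response slot a CDP
# client waits on. value!(timeout) blocks until the response is fulfilled
# or the timeout elapses; the timeout is only an upper bound.
class Pending
  def initialize
    @mutex = Mutex.new
    @cond = ConditionVariable.new
    @fulfilled = false
    @value = nil
  end

  # Called by the reader thread when the matching response arrives.
  def fulfill(value)
    @mutex.synchronize do
      @value = value
      @fulfilled = true
      @cond.signal
    end
  end

  # Called by the thread that issued the command.
  def value!(timeout)
    deadline = Time.now + timeout
    @mutex.synchronize do
      until @fulfilled
        remaining = deadline - Time.now
        raise "Timed out after #{timeout}s" if remaining <= 0
        @cond.wait(@mutex, remaining)
      end
      @value
    end
  end
end

pending = Pending.new
# Simulate a browser response arriving after ~0.1s.
Thread.new { sleep 0.1; pending.fulfill({ "id" => 1, "result" => {} }) }

started = Time.now
result = pending.value!(30) # generous timeout
elapsed = Time.now - started
# elapsed tracks the response time (~0.1s), not the 30s timeout.
```

If a suite genuinely slows down as the timeout grows, that suggests some waits are only ending when the timeout expires rather than when an event fires.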
Curious if you've seen anything like that in your experience...
@lawso017 I haven't found the issue with Capybara.server, which means the issue is in our application then.
In Ferrum/Cuprite there are a few places related to timeouts, but in general, if your test only passes with an increased timeout, it usually means the test is not properly written -- though it could also be a bug somewhere. I've seen some cases, even in our application, where I had to rewrite tests a bit, but I can't remember the details now. Run FERRUM_DEBUG=true bundle exec rspec spec/file
for one of the suspicious tests with the increased timeout and send me the log file by email; I'll save you some time, as I can find the issue pretty quickly.