Comments (3)

Krinkle commented on August 15, 2024

That sounds great to me! I've seen ad-hoc ways to do this in a few places already. Most recently, I did one myself here in this repo to speed up the CLI tests.

function concurrentMapKeys (input, concurrency, asyncFn) {
  // …
}

QUnit.module('CLI Main', () => {
  QUnit.test.each('fixtures',
    // Faster testing: Let the commands run in the background with concurrency,
    // and only await/assert the already-started command.
    concurrentMapKeys(readFixtures(FIXTURES_DIR), 0, (runFixture) => runFixture()),
    async (assert, fixture) => {
      assert.timeout(10000);
      const result = await fixture;
      assert.equal(result.snapshot, result.expected, result.name);
    }
  );
});
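
For reference, here is a minimal sketch of what such a helper could look like. This is hypothetical; the actual helper in the repo may differ, including in how it interprets a concurrency of 0 (treated as "unlimited" below).

// Hypothetical sketch of concurrentMapKeys: eagerly start asyncFn for each value
// of the input object, at most `concurrency` at a time (0 meaning unlimited here),
// and return an object mapping the same keys to the in-flight promises.
function concurrentMapKeys (input, concurrency, asyncFn) {
  const output = {};
  const limit = concurrency > 0 ? concurrency : Infinity;
  const waiting = [];
  let active = 0;

  function acquire () {
    if (active < limit) {
      active++;
      return Promise.resolve();
    }
    return new Promise((resolve) => waiting.push(resolve));
  }

  function release () {
    active--;
    const next = waiting.shift();
    if (next) {
      active++;
      next();
    }
  }

  for (const key of Object.keys(input)) {
    output[key] = acquire()
      .then(() => asyncFn(input[key]))
      .finally(release);
  }
  return output;
}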

A few things that come to mind:

  • Hooks should be fine. These tend to be contextual and well-written in practice, although any use of lexical variables will of course be racy; that's something to document as a gotcha. But because we already automatically create and tear down these context objects, I'm not too worried about having multiple of them.
  • Modules can already overlap via QUnit.config.reorder. I think you're proposing concurrency within a single module only, but this means we could potentially go further.
  • Events are where we could get into some difficulty. I believe there's a pretty strong assumption that testStart/log/testEnd happen in that order and don't overlap. Given that reporters are not usually maintained by the people writing the tests, this is not something you can take care of by writing tests differently or better. We may want to fake a thing or two here so that we neatly present a sequential timeline, e.g. buffer events and make sure they get reported FIFO based on the order in which tests start (see the sketch after this list). That way, if test 1 starts and takes a second while tests 2 and 3 run in the meantime, those won't be reported until test 1 is finished.
  • Runtime metrics. These would presumably remain measured within a single test, but it would become common for a test to be wrongly reported as slow just because it did one async thing, the main thread then moved on to a different, slow test, and that test's time got observed as part of the first one's. In addition, the total runtime would no longer be the sum of test runtimes. I think that's already the case; afaik we measure the total separately, so it should remain overall wall time without double-counting. We may want to add a boolean flag to the event data so that reporters can either choose to report the module runtime only, or mark it as "async" so that readers know these numbers may be off.
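
To illustrate the buffering idea from the events point above, here is a rough sketch (hypothetical, not QUnit internals): events for a test are held back until every test that started before it has ended, so reporters still observe a sequential testStart/log/testEnd timeline per test.

// Hypothetical sketch of FIFO event buffering for reporters.
class SequentialEventBuffer {
  constructor (emitFn) {
    this.emit = emitFn; // e.g. (type, data) => invoke reporter callbacks
    this.queue = [];    // entries in test start order
  }

  onTestStart (test) {
    this.queue.push({ test, events: [['testStart', test]], done: false });
  }

  onTestEvent (test, type, data) {
    this.queue.find((entry) => entry.test === test).events.push([type, data]);
  }

  onTestEnd (test) {
    const entry = this.queue.find((e) => e.test === test);
    entry.events.push(['testEnd', test]);
    entry.done = true;
    // Flush from the front for as long as the oldest started test has finished,
    // so a fast test 2 is not reported before a slow test 1 that started earlier.
    while (this.queue.length && this.queue[0].done) {
      for (const [type, data] of this.queue.shift().events) {
        this.emit(type, data);
      }
    }
  }
}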

In terms of API, the natural choice would indeed be a submethod, but we already have a lot of them:

https://qunitjs.com/api/QUnit/module/

And more are coming in #1772.

For concurrency, perhaps we can use an option instead.

QUnit.module('example', { concurrency: boolean | number }, function ( hooks ) {
  QUnit.test();
  QUnit.test();
  QUnit.test();
});

Given that the options object doubles as the seed for the test context object, this could potentially conflict with existing code. That's not unprecedented, in that we already reserve the before, beforeEach, afterEach, and after keys in the same object. But it is something to consider.
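
To make the conflict concrete (a contrived example): every non-reserved key in that object currently ends up on the test context, so a suite that already uses a concurrency key for its own data would change behaviour if QUnit started reserving it.

QUnit.module('example', {
  concurrency: 4 // today this simply becomes `this.concurrency` in each test
}, function ( hooks ) {
  QUnit.test('uses its own concurrency value', function (assert) {
    // If QUnit reserved the `concurrency` key, this existing usage would break.
    assert.strictEqual(this.concurrency, 4);
  });
});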

Another idea would be to add it to the hooks object. That avoids conflicts, and has the benefit of working within the simpler-looking pattern of only using the scope parameter. Passing three parameters feels more complex, with another thing to learn.

QUnit.module('example', function ( hooks ) {
  hooks.enableParallel();

  QUnit.test();
  QUnit.test();
  QUnit.test();
});

"To have parallelism, you'd need to split test-running into workers -- which would be useful as well -- and maybe easier from a context isolation perspective?"

Yeah, that's something we can't really do in core, but something the CLI and runners like Testem could do. See the previous discussion in "Split tests for parallelization" (#947).

To do this well, without needing upfront awareness of the modules and tests, you'd want to deterministically shard and filter by moduleId. This has been requested in "Dynamically filter tests to run" (#1183) as well.
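
A rough sketch of what deterministic sharding could look like on the runner side (hypothetical, not an existing QUnit API): hash each module's name into one of N shards, and have each worker only load or run the modules assigned to it.

// Hypothetical sketch: stable shard assignment by module name, so N worker
// processes can each run a disjoint subset without upfront coordination.
function inShard (moduleName, shardIndex, shardTotal) {
  // FNV-1a hash for a deterministic, seed-free assignment.
  let hash = 0x811c9dc5;
  for (let i = 0; i < moduleName.length; i++) {
    hash ^= moduleName.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) % shardTotal === shardIndex;
}

// A wrapper could consult this before registering a module, e.g.:
// if (inShard('CLI Main', Number(process.env.SHARD), Number(process.env.SHARD_TOTAL))) {
//   require('./test/cli/main.js');
// }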

Krinkle commented on August 15, 2024

Fixture reset. If we think parallel is likely to be useful in a browser, we may want to give each test its own fixture. That could involve providing an automatic DOM reference like "this.fixture" or "assert.getFixture()", or some other mechanism.
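
Something close to this can already be approximated in userland today; a sketch (assuming a browser environment, not a proposed API) would be to create a fresh element per test in a beforeEach hook and remove it afterwards:

QUnit.module('widgets', function ( hooks ) {
  hooks.beforeEach(function () {
    // Give each test its own fixture element on the test context.
    this.fixture = document.createElement('div');
    document.body.appendChild(this.fixture);
  });
  hooks.afterEach(function () {
    this.fixture.remove();
  });

  QUnit.test('renders into its own fixture', function (assert) {
    this.fixture.innerHTML = '<p>hello</p>';
    assert.strictEqual(this.fixture.querySelector('p').textContent, 'hello');
  });
});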

QUnit.config.current may also need some thought. There are still extension features for which this is the only documented way to access certain information. That's solvable; we could document assert.test as an officially supported accessor to make up for it.
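
For example (a sketch; assert.test is currently undocumented, so treat its use here as an assumption), a custom assertion or plugin could read per-test information from the assert object rather than from the global QUnit.config.current, which becomes ambiguous once tests overlap:

QUnit.assert.named = function (value, expected) {
  // `this` is the Assert instance; `this.test` points at the current Test.
  this.pushResult({
    result: value === expected,
    actual: value,
    expected: expected,
    message: 'in test: ' + this.test.testName
  });
};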

Krinkle commented on August 15, 2024

If we limit this to the CLI, then a safer option would indeed be to use subprocesses. A fairly straightforward approach could be to spread the work out by test file. This is cruder, in that tests concentrated in one module wouldn't run in parallel, but that might be offset by the gain of truly parallel main threads, and by having entire modules run concurrently.
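
A rough sketch of that approach (a hypothetical wrapper, not part of the QUnit CLI): spawn one qunit child process per test file, a few at a time, and fail the run if any of them fail.

// Hypothetical wrapper: run each given test file in its own `qunit` child
// process, with a small limit on how many run at once.
const { execFile } = require('node:child_process');

function runFile (file) {
  return new Promise((resolve) => {
    execFile('qunit', [file], (error, stdout, stderr) => {
      process.stdout.write(stdout);
      process.stderr.write(stderr);
      resolve(error ? 1 : 0);
    });
  });
}

async function main () {
  const files = process.argv.slice(2);
  const limit = 4; // arbitrary number of parallel processes
  const exitCodes = [];
  let next = 0;
  // Start `limit` workers that keep pulling the next file until none are left.
  await Promise.all(Array.from({ length: limit }, async () => {
    while (next < files.length) {
      exitCodes.push(await runFile(files[next++]));
    }
  }));
  process.exitCode = exitCodes.some((code) => code !== 0) ? 1 : 0;
}

main();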
