Comments (13)
You might need to define `update_properties` for Chrome, cf. https://github.com/web-platform-tests/wpt/blob/07a56dce8775820d3a42d4ac94beb39428f6a585/tools/wptrunner/wptrunner/browsers/firefox.py#L265-L267 and https://github.com/web-platform-tests/wpt/blob/07a56dce8775820d3a42d4ac94beb39428f6a585/tools/wptrunner/wptrunner/browsers/firefox.py#L55. It should be OK to make it just return `[], {}`, since you presumably don't care about combining the results from multiple run configurations. (I think the factoring here is a little off; the properties used in the update should be configured in the updater, not linked to a specific browser, because really it depends on the target deployment, e.g. whether it's wpt CI or Gecko CI or whatever.)
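For illustration, a minimal sketch of what such a dummy hook could look like in `tools/wptrunner/wptrunner/browsers/chrome.py`, mirroring the shape of the Firefox version linked above (the exact placement within the file is an assumption):

```python
def update_properties():
    """Return the run-info properties used to split expectation metadata
    by run configuration: a list of primary properties and a dict of
    dependent properties.

    Returning empty values means results from different run
    configurations are not distinguished, which is fine when
    expectations are only ever updated from a single configuration.
    """
    return [], {}
```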
from web-streams-polyfill.
@MattiasBuelens I would be very happy to see a PR for that against WPT. I'm not sure what the correct fix is, but I've never gotten the expectations stuff to work with `./wpt run`, and it sounds like you have! :)
No, not yet.
Right now, I'm using wpt-runner to run the tests on Node with jsdom, but that has its limits. For example, for #20 I want to take the browser's built-in streams implementation and polyfill only the newer (not yet natively implemented) features. Or perhaps I find that a browser doesn't yet implement a spec change, and I may want to monkey-patch certain properties and methods ahead of browser updates.
To test this, I need a way to run WPT in actual browsers with the polyfill loaded. It seems that `wpt run` almost fits the bill, except I can't figure out how to make it load an extra script file before every test. All of the relevant streams tests are already in `.any.js` format, so it'd suffice if it behaved like an extra `// META: script=` rule (like this).
Would it be better if I opened an issue over at web-platform-tests/wpt?
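For reference, the `// META:` header referred to above sits at the top of an `.any.js` test, and loading the polyfill would amount to one extra `script=` line; a hypothetical sketch (the polyfill URL is an assumption):

```js
// META: global=window,worker
// META: script=/web-streams-polyfill/polyfill.es2018.js
```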
Sure, feel free to open an issue at https://github.com/web-platform-tests/wpt/issues.
I would explore one of these things:
- If all of these tests already load a common helper, edit that helper to first inject your polyfill
- Otherwise, add your polyfill to `testharnessreport.js`
- Or just edit every test individually.
All of these require local modification. If you want something that can be run with `./wpt run` out of the box, maybe that's possible to build out once the concept has been proven to work.
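For the `testharnessreport.js` option, a hypothetical sketch of what the injection could look like (the polyfill URL and the placement inside `resources/testharnessreport.js` are assumptions, not the file's actual contents):

```javascript
// Hypothetical snippet for the top of resources/testharnessreport.js:
// install the polyfill before any test code runs.
const POLYFILL_URL = "/web-streams-polyfill/polyfill.es2018.js";

function injectPolyfill(scope) {
  if (typeof scope.importScripts === "function") {
    // Worker scopes: importScripts loads and executes synchronously.
    scope.importScripts(POLYFILL_URL);
    return "worker";
  }
  if (typeof scope.document !== "undefined") {
    // Window scope: document.write keeps the load synchronous, so the
    // polyfill is in place before the test file's own script executes.
    scope.document.write('<script src="' + POLYFILL_URL + '"><\/script>');
    return "window";
  }
  // Unknown scope (e.g. Node while testing this helper): do nothing.
  return "none";
}

injectPolyfill(globalThis);
```

The first option (editing a common helper) works the same way, but only if every relevant test actually loads that helper.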
I would have preferred an out-of-the-box solution, but I can understand that, since one doesn't exist yet, it'd be risky to commit to something without knowing whether it would actually solve my problem. So sure, I'll start by hacking the WPT files locally first.
Thanks for the help!
Right, I'm guessing you might discover some new problems by just hacking it together which are worth knowing about early. For example, if you need to inject your polyfill at a very specific time, that might affect the shape of a solution.
All right, I got something working.
- Copy `dist/polyfill.es2018.js` to `test/web-platform-tests/web-streams-polyfill/polyfill.es2018.js`
- Patch `tools/wpt/serve.py` (within `test/web-platform-tests/`):

```diff
@@ -81,6 +81,7 @@
             query = "?" + query
         meta = "\n".join(self._get_meta(request))
         script = "\n".join(self._get_script(request))
+        script = self._script_replacement("script", "/web-streams-polyfill/polyfill.es2018.js") + script
         response.content = self.wrapper % {"meta": meta, "script": script, "path": path, "query": query}
         wrap_pipeline(path, request, response)
```

- Run (on Windows):

```
cd test/web-platform-tests/
python ./wpt run chrome "streams/readable-streams/tee.any.html" --binary "C:\Program Files\Google\Chrome\Application\chrome.exe" --yes --headless --no-pause-after-test
```
That runs the tests for `ReadableStream.tee()`. And yes, they're all passing with this polyfill.
I can change this to `wpt run chrome streams` to run all of the tests, but actually I only want to run a subset of the stream tests. For example, we don't support transferring a polyfilled `ReadableStream` to a worker, so ideally I want to skip those tests. Similarly, we can't detach `ArrayBuffer`s synchronously from user-land code, so some tests for `.respond()` and `.respondWithNewView()` are expected to fail.
Right now, I'm managing this with some custom filters. If I understand the documentation for `wpt` correctly, the tool can generate a list of "expectations" with `wpt update-expectations`, which can then be fed back into `wpt run`? Guess I need to do some more digging.
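As context for the above: this kind of subset selection is what wptrunner's expectation metadata expresses. A `.ini` file in the metadata directory, mirroring the test's path (e.g. `mymeta/streams/readable-byte-streams/bad-buffers-and-views.any.js.ini`), can mark individual subtests as expected to fail with an `expected:` key, or skip a test entirely with a `disabled:` key. A hypothetical sketch (the subtest name is illustrative):

```
[bad-buffers-and-views.any.html]
  [ReadableStream with byte source: respondWithNewView() with a detached buffer (illustrative subtest name)]
    expected: FAIL

[bad-buffers-and-views.any.worker.html]
  disabled: transferring polyfilled streams to a worker is unsupported
```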
@foolip Sorry to bother you. Do you happen to know of any example usages of `wpt update-expectations`, and how to feed that into `wpt run`? Or perhaps other/better ways to specify which tests are expected to fail and/or should be skipped by `wpt run`? I've been staring at the Chromium and Firefox source code, but there's just... too much going on there.
I'm afraid I don't know how to use that; it's not used in Chromium AFAIK. I think it's used by Gecko, however, and @jgraham might know how to invoke it.
So, I think no one uses the `wpt` command regularly; Gecko uses the functionality but with a different frontend.
But assuming your setup is pretty standard, `wpt run --log-wptreport wptreport.json <product> <path>; wpt update-expectations wptreport.json` should work. You might want to create a metadata directory and use `--metadata <path>` with both commands to ensure the metadata gets used.
Not having much luck...
```
PS C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests> python ./wpt update-expectations --product chrome streams.log.json --metadata mymeta
Processing log 1/1
Traceback (most recent call last):
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\wpt", line 11, in <module>
    wpt.main()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wpt\wpt.py", line 179, in main
    rv = script(*args, **kwargs)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wpt\update.py", line 32, in update_expectations
    updater.run()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\update.py", line 182, in run
    rv = update_runner.run()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 63, in run
    rv = step(self.logger).run(step_index, self.state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 34, in run
    self.create(state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\update.py", line 98, in create
    runner.run()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 63, in run
    rv = step(self.logger).run(step_index, self.state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 34, in run
    self.create(state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\metadata.py", line 24, in create
    metadata.update_expected(state.paths,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 75, in update_expected
    for metadata_path, updated_ini in update_from_logs(id_test_map,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 278, in update_from_logs
    for item in update_results(id_test_map, update_properties, full_update,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 300, in update_results
    updated_expected = test_file.update(default_expected_by_type, update_properties,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 776, in update
    subtest.update(full_update=full_update,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\manifestupdate.py", line 286, in update
    prop_update.update(full_update,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\manifestupdate.py", line 407, in update
    property_tree = self.property_builder(self.node.root.run_info_properties,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\manifestupdate.py", line 309, in build_conditional_tree
    properties, dependent_props = run_info_properties
ValueError: not enough values to unpack (expected 2, got 0)
```
I generated `streams.log.json` with:

```
python ./wpt run chrome "streams/readable-byte-streams/bad-buffers-and-views.any.html" --binary "C:\Program Files\Google\Chrome\Application\chrome.exe" --webdriver-binary "_venv3\Scripts\chromedriver.exe" --no-pause-after-test --log-wptreport streams.log.json --metadata mymeta
```
I'm on web-platform-tests/wpt@07a56dc.
Am I doing something wrong, or should I open an issue?
@jgraham Sorry for not getting back to you sooner. I can confirm that adding a dummy `update_properties` to `chrome.py` does the trick! The expectations get written to `meta/`, and re-running `wpt run` with `--meta` set does indeed compare the new test results against those expectations.
Would you accept a PR for this in upstream WPT? I don't think it'd hurt to copy a generic `update_properties` function like the one from `servo.py` over to `chrome.py`, no? Or is it better to keep it as a local patch for this polyfill only?
I decided not to use `./wpt run` in the end. Instead, I wrote my own test runner on top of Playwright and re-use the local web server from wpt-runner to host the tests. See #96 for more details if you're curious.
Thanks for your assistance anyway!