youtube / spfjs
A lightweight JS framework for fast navigation and page updates from YouTube
Home Page: https://youtube.github.io/spfjs/
License: MIT License
Add a configuration option that takes an array of querystring parameter names (["foo"]
in this case). On navigation, persist the parameter and its value(s).
It's unlikely but possible to have multiple values for a parameter. For example: "example.com/?foo=bar&foo=bar2". Navigating to /baz/ should persist both values of "foo".
If the parameters listed in the config are not present, nothing will be persisted across navigation. For example: "example.com/?abc=def" will navigate to "example.com/baz/" normally.
An advanced version of this could be a whitelist not only of parameter names but also values. For now, I'll just implement names.
Update the docs accordingly.
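A sketch of the persistence step described above, using a hypothetical helper; the function name, config shape, and string handling here are illustrative assumptions, not the actual SPF API:

```javascript
// Sketch: given the current URL, the navigation target, and the list of
// configured parameter names, carry the matching querystring values over.
// Hypothetical helper, not the real SPF implementation.
function persistParams(currentUrl, targetUrl, names) {
  var qIndex = currentUrl.indexOf('?');
  if (qIndex === -1) return targetUrl;
  var pairs = currentUrl.slice(qIndex + 1).split('&');
  var kept = [];
  for (var i = 0; i < pairs.length; i++) {
    var key = pairs[i].split('=')[0];
    // Keep every value of a listed parameter, even duplicates
    // like "?foo=bar&foo=bar2".
    if (names.indexOf(key) !== -1) kept.push(pairs[i]);
  }
  if (!kept.length) return targetUrl;  // nothing listed: navigate normally
  var sep = targetUrl.indexOf('?') === -1 ? '?' : '&';
  return targetUrl + sep + kept.join('&');
}
```

With this shape, navigating from "example.com/?foo=bar&foo=bar2" to "/baz/" with `["foo"]` configured yields "/baz/?foo=bar&foo=bar2", while "example.com/?abc=def" navigates to "/baz/" unchanged.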
SPF is currently being compiled with Closure Compiler v20131014 to maintain compatibility with Java 6. We should upgrade to the latest version, which will require Java 7.
There are a few tests which check for the existence of features before executing (such as event listeners in history_test.js and Object.keys in cache_test.js).
Since this mostly reflects a common base level of supported browsers, we should see if we can move that logic to a higher level within Jasmine.
In order to use the navigation features of SPF, the browser must support the pushState History API. We should include this info in the README, along with a link to http://caniuse.com/#feat=history.
I can reproduce this in FF, Opera, and Chrome.
However, if before step 3 I right-click on the video link -> Inspect element -> remove "spf-link" from the <a> element's class attribute, the video loads fine.
Running make on master fails with the following output:
Makefile:71: target `vendor/ninja/ninja' given more than once in the same rule.
[1/2] unzip vendor/closure-compiler/compiler-20140625.zip -> vendor/closure-compiler
Archive: vendor/closure-compiler/compiler-20140625.zip
[2/2] jscompile build/spf.js
src/client/cache/cache.js:166: ERROR - variable key is undeclared
for (key in storage) {
^
1 error(s), 0 warning(s)
It succeeds at commit e272bad51b0318a097dd6871bd325e00301c5f92 from 1st September, so I am guessing it's a recent regression.
Currently SPF is delivered in the traditional format of a compiled binary wrapped in an anonymous function, with exports being assigned to the global level via the window variable.
While there's nothing wrong with the current system, if someone were to want to load SPF via an AMD (e.g. RequireJS) or CommonJS loader, it would require a shim config.
Wrap the compiled binary in a UMD-style format like that defined at https://github.com/umdjs/umd/blob/master/commonjsStrict.js for broader compatibility.
The source for JS API documentation is src/api.js
. We should generate documentation for the API on the website from that file.
We (eBay) are doing some profiling on SPF navigation and noticed a strange behavior on the 5th navigation. We calculate the SPF load time as timing.spfProcessFoot - timing.startTime on the spfdone event.
For a fresh navigation, the load times are reported correctly (roughly 100ms). But on navigating to a previously navigated link the load time reported is very high (more than 20000ms), although the page loaded quickly as previous navs.
We also noted that on navigating to a previously navigated link, the timing object is augmented with a lot of extra timing information which was not present initially. Do you know what's happening? Is this a known bug?
To provide more (and better) information, create a GitHub Pages website that will contain expanded documentation, examples, etc, beyond what the README can contain.
The current event and callback system is as follows:
1. {handle click} -or- {handle history}
2. {begin navigation}
3. {send request} -or- {promote prefetch}
dispatch("spfrequested")
{if canceled, redirect}
4. {for multipart: receive part}
dispatch("spfpartreceived")
{if canceled, redirect}
5. {for multipart: process part}
callback("onPart")
{if canceled, redirect}
dispatch("spfpartprocessed")
{if canceled, redirect}
6. {receive response}
dispatch("spfreceived")
{if canceled, redirect}
7. {process response}
callback("onSuccess")
{no cancellation}
dispatch("spfprocessed")
{no cancellation}
8. {at any time: handle error}
callback("onError")
{if canceled, ignore error}
dispatch("spferror")
{if canceled, ignore error}
{redirect}
The problems with the current system are:
To solve these, I propose the following system instead:
1. {handle click} -or- {handle history}
dispatch("spfclick") -or- dispatch("spfhistory")
{if canceled, ignore click/history}
2. {begin navigation}
dispatch("spfnavigated")
{no cancellation}
callback("onRequest")
{if canceled, redirect}
dispatch("spfrequest")
{if canceled, redirect}
3. {send request} -or- {promote prefetch}
dispatch("spfrequested")
{no cancellation}
4. {for multipart: receive part}
dispatch("spfpartreceived")
{no cancellation}
callback("onPartProcess")
{if canceled, redirect}
dispatch("spfpartprocess")
{if canceled, redirect}
5. {for multipart: process part}
dispatch("spfpartprocessed")
{no cancellation}
6. {receive response}
dispatch("spfreceived")
{no cancellation}
callback("onProcess")
{if canceled, redirect}
dispatch("spfprocess")
{if canceled, redirect}
7. {process response}
dispatch("spfprocessed")
{no cancellation}
callback("onSuccess")
{no cancellation}
dispatch("spfsuccess")
{no cancellation}
8. {at any time: handle error}
callback("onError")
{if canceled, ignore error}
dispatch("spferror")
{if canceled, ignore error}
{redirect}
This solves the above problems:
This means that the chain of events for the simple case would look like:
Event | Callback | Cancel Result |
---|---|---|
spfclick | onClick | Ignore |
spfnavigated | | |
spfrequest | onRequest | Redirect |
spfrequested | | |
spfreceived | | |
spfprocess | onProcess | Redirect |
spfprocessed | | |
spfsuccess | onSuccess | |
The success event/callback is the only exception to the present/past tense rule,
but maybe we can rename it.
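The cancellation semantics in the proposal could be sketched with plain functions standing in for DOM event dispatch; the names and shapes here are illustrative only. A present-tense event may be canceled by any listener (triggering a redirect), while past-tense events are notification-only:

```javascript
// Sketch of the dispatch/cancel pattern: a listener cancels by
// returning false, analogous to event.preventDefault() on a
// cancelable CustomEvent. Not the actual SPF dispatcher.
function dispatch(listeners, name, cancelable) {
  var canceled = false;
  (listeners[name] || []).forEach(function(fn) {
    if (fn() === false && cancelable) canceled = true;
  });
  return !canceled;  // false means "fall back to a redirect"
}
```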
Why do we need to install Python and Java just to configure this JavaScript library?
Java especially is a heavy requirement just for setting this up for our website.
Why not use Node.js, like most of the newer web development / design tools do? E.g. Yeoman, Bower, Google Web Starter Kit, Ghost, Brackets...
When I try to run the server using app.py, I am getting a 404 for /static/dev-spf-bundle.js.
Also, Chrome is showing "Your browser does not fully support SPF." for http://localhost:8080/.
Response processing is currently:
- title — Update document title
- url — Update document url
- css — Install page-wide styles
- attr — Set element attributes
- html — Set element content and install element scripts (styles handled by browser)
- js — Install page-wide scripts

In a standard response, styles and scripts can be placed anywhere. In the current SPF response, styles cannot be placed in the foot and scripts cannot be placed in the head. While this is generally "best practice", executing scripts early is also a common need (e.g. async script loading, Google Analytics, web font loading, etc). The current SPF behavior diverges from standard behavior, which is unexpected.

To make this more uniform, update processing to install page-wide scripts and styles in both steps 3 and 6. To make this clearer, rename css to head, html to body, and js to foot. Response processing would then be:
- title — Update document title
- url — Update document url
- head — Install early page-wide scripts and styles
- attr — Set element attributes
- body — Set element content and install element scripts and styles
- foot — Install late page-wide scripts and styles

Also, styles that occur in body fragments are currently installed and uninstalled automatically by the browser, whereas scripts, which are not natively supported there, are parsed and executed by SPF. This activates SPF's version handling and execution logic for scripts, but styles don't get this. Make this consistent by treating styles in body fragments the same way as we would in head or foot fragments.
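A hypothetical response using the renamed fields might look like this, shown as a JS object; all field contents are illustrative, and the keying of attr/body by element ID is an assumption:

```javascript
// Illustrative SPF response after the proposed css/html/js ->
// head/body/foot rename. Not a response from a real server.
var response = {
  'title': 'New Page',
  'url': '/new-page',
  'head': '<style>.hero { color: red; }</style>',   // early page-wide styles/scripts
  'attr': {'content': {'class': 'page new-page'}},  // per-element attributes
  'body': {'content': '<div>Hello</div>'},          // fragments keyed by element id
  'foot': '<script>console.log("late");</script>'   // late page-wide styles/scripts
};
```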
Currently, preconnecting to URLs early in navigation or before navigation begins can be done via ad hoc JS. The benefit is to resolve DNS and establish the socket for the connection early, before the request is made, reducing the time it takes to make the request. Support this functionality in a standardized way.
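The current ad hoc approach might look something like the following; a standardized entry point (e.g. a hypothetical spf.preconnect) would be a new addition, not an existing function:

```javascript
// Sketch of ad hoc preconnecting: insert a <link rel="preconnect"> so
// DNS resolution and socket setup start before the request is made.
// Illustrative only; "doc" is passed in to keep the sketch testable.
function preconnect(doc, url) {
  var link = doc.createElement('link');
  link.rel = 'preconnect';
  link.href = url;
  doc.head.appendChild(link);
  return link;
}
```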
We should have contribution guidelines in a CONTRIBUTING file that outlines the basic process and requirements for submitting patches.
We should sign up with travis-ci.org or some other continuous integration platform in order to display the current build status. We should always be green, but it is good to have that information displayed in the README.
There should also be some sort of web-hooks that run tests on pull requests and then comment on the pull request to determine if the pull request can be merged.
As noted in the review for #177, we should simplify updates to the downloads page so that it always links to the latest release.
The general plan for SPF distribution is:
For bower or other package managers or build tools, we will provide a list of source files in dependency order.
To facilitate release distribution, add tooling to automate creating releases, tagging git revisions, and generating source lists.
Currently scripts and styles are executed according to the following rules:
- For an inline <style> or inline <script> tag, execute unconditionally by appending to the document.
- For an external <link> or external <script src> tag, only execute if a style/script has not already been executed with that same URL.
- For an external <link> or external <script src> tag with a name attribute, remove all other styles/scripts with that same name after executing.

In a standard response, scripts and styles are always unconditionally executed. SPF changes this behavior to reduce or eliminate duplicated work. However, SPF executes scripts and styles inconsistently, since inline and external tags are treated differently. Furthermore, this behavior diverges from standard behavior, which is unexpected.

Make the current SPF behavior opt-in by requiring a name attribute, and extend it to inline tags as well. For inline scripts and styles, uniqueness would be determined by a hashcode of the text content after removing whitespace. For external scripts and styles, uniqueness would continue to be determined by the URL.
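The inline-uniqueness idea could be sketched as follows; the hash function and key format are illustrative assumptions (any stable string hash would do):

```javascript
// Sketch: strip whitespace, then hash the remaining text with a simple
// 31-multiplier string hash. Two inline tags whose content is identical
// after whitespace removal map to the same key. Illustrative only.
function contentKey(text) {
  var s = text.replace(/\s+/g, '');
  var hash = 0;
  for (var i = 0; i < s.length; i++) {
    hash = (hash * 31 + s.charCodeAt(i)) | 0;  // keep within 32 bits
  }
  return 'hash-' + hash;
}
```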
This would change the script and style execution rules to:
- For an inline <style>, inline <script> tag, external <link>, or external <script src> tag with a name attribute, only execute if a style/script has not already been executed that matches the URL or text content. Remove all other scripts and styles with the same name after executing.

When deferring task execution to an external scheduler, we need to ensure that we have error handling: spf.execute can help with this.

Right now, the SPF website doesn't use SPF navigation. Fix that.
The default way for SPF (dynamic) requests to be distinguished from their traditional (static) counterparts is via a URL identifier. This string is appended to URLs before the request is sent and can be changed via the url-identifier
config setting.
SPF used to support sending a HTTP header to provide servers an alternate system for negotiation between the request formats. We removed this in 064ad3a because it easily leads to the browser displaying the SPF response (JSON) instead of the full response (HTML) when navigating back from a page that does not support SPF, if the URL was the same for both responses (i.e. if the url-identifier
was set to null or an empty string).
In https://groups.google.com/d/topic/spfjs/MJQBqZ0Jtpk/discussion we discussed the possibility of re-enabling this support. The key issue is how browsers and intermediate caching proxies will determine if they should reuse a previous response. If a URL does not uniquely identify a response (including the format, e.g. JSON vs HTML), then the request headers must also be taken into account, and the server should set the Vary
response header. From http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44:
The Vary field value indicates the set of request-header
fields that fully determines, while the response is fresh,
whether a cache is permitted to use the response to reply to
a subsequent request without revalidation. For uncacheable or
stale responses, the Vary field value advises the user agent
about the criteria that were used to select the representation.
In the current implementation, requests are:
GET /path?spf=navigate
Responses have no requirements.
To enable the server-side negotiation between the static and dynamic requests, requests could be:
GET /path
Accept: application/json
X-SPF-Request: navigate
Then, the response sent by the server should include the Vary
header:
Vary: Accept
Note: The important thing is for the SPF request to have a different Accept
header than the default used by the browser. A value of application/json
should satisfy this requirement. A list of defaults used by various browsers can be found at https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation.
Because using this server-side negotiation places an extra restriction on developers, and because the Vary header is not as widely used/understood (or even sometimes not well supported: http://blogs.msdn.com/b/ieinternals/archive/2009/06/17/vary-header-prevents-caching-in-ie.aspx), we should be sure to indicate that using this system is an "advanced" option.
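The negotiation above can be sketched as a framework-agnostic helper over the request headers; this is illustrative, not actual SPF server code (header names are shown lowercased, as in Node's http module):

```javascript
// Sketch: decide between the JSON (SPF) and HTML (static) response
// based on the request headers, and name the Vary value the response
// must carry so caches key on Accept.
function negotiate(headers) {
  var isSpf = headers['x-spf-request'] === 'navigate' ||
      (headers['accept'] || '').indexOf('application/json') === 0;
  return {
    format: isSpf ? 'json' : 'html',
    vary: 'Accept'  // caches must account for Accept when reusing responses
  };
}
```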
For situations where a hub page is repeatedly being hit from a back button, its cache entry could be cleared before the nav limit is hit (it would be the oldest entry).
As noted in #6 back in July,
As of right now, we expect server-side integration work to be needed to use SPF,
but a purely client-side mode is on the roadmap (at the expense of some efficiency
and advanced features).
This issue will track supporting the purely client-side version of transitions, which could also be called "document" mode. This mode will not require an alternate JSON response for transport and instead will allow transitions using the same HTML response as a traditional page load. While this mode will prevent use of some features (e.g. multipart responses) and is not as flexible/efficient for some use cases (e.g. client-side templating), it will have a lower barrier to entry.
When we prefetch CSS resources, we update the status value to say that the resource is already loaded. This prevents all future requests to that resource from triggering.
SPF triggers important events that it would be nice to have documented in README.md.
There are situations where the application knows that state has changed in a way which would make cached pages not useful. We should provide an API to invalidate specific cache entries (by URL?), or the entire cache.
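A possible shape for such an API, sketched as a plain object; all names and the entry structure are assumptions for illustration, not the current SPF API:

```javascript
// Sketch of cache invalidation: remove a single entry by URL, or
// clear the whole cache. Illustrative only.
var cache = {
  entries: {},
  set: function(url, resp) { this.entries[url] = resp; },
  remove: function(url) { delete this.entries[url]; },  // invalidate one URL
  clear: function() { this.entries = {}; }              // invalidate everything
};
```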
Great library, guys! I'm having a bit of trouble understanding it though. I think there may be a mistake in the README file.
In the server-side section, it's not completely clear to me how the JSON response from the server relates to the existing page DOM. How does the "foot" attribute in the response work? There is no <foot> tag, nor a div called #foot. Similarly, in the body object of the JSON response, should the attribute inside have a key of "#content"?
Sorry for the dumb questions. Can't wait to try this out on lump.co!
Currently, the set of output files is:
bootloader.js
debug-bootloader.js
debug-spf.js
spf.js
tracing-bootloader.js
tracing-spf.js
We should standardize this to the following pattern: target{-modification}.js.
I also propose that we shorten some of these names:
New | Old |
---|---|
boot | bootloader |
trace | tracing |
That would mean the set of output files would change as follows:
New | Old |
---|---|
boot.js | bootloader.js |
boot-debug.js | debug-bootloader.js |
boot-trace.js | tracing-bootloader.js |
spf.js | spf.js |
spf-debug.js | debug-spf.js |
spf-trace.js | tracing-spf.js |
After #74 is complete and the release workflow is complete, we should add (or update) the tooling to facilitate distribution to CDNs in addition to npm.
Currently CSS loading doesn't block SPF response processing, as browser support for detecting stylesheet load completion is a bit iffy. Desktop support is decent, but mobile is a bit less thorough. It makes sense to be safer by default, but for sites with more CSS variation, we should allow blocking as an option.
Desktop support:
Chrome 19, Safari 6, Firefox 9, Opera and IE 5.5 are supported.
In review for #46, @DavidCPhillips raised the issue that the current unit tests for script and styling loading only cover the expected cases:
Regarding resource_test.js
:
These pretty much only hit the happy path cases. It'd be nice to see
some tests for name mismatches (existing url and new name or vice
versa) and multiple urls that are already partially loaded.
Regarding script_test.js
:
On the other hand, these are now pretty trivial functions. I, for one,
wouldn't mind removing their tests altogether.
There are certain situations where SPF may want to defer some of its execution while the application processes part of the page. This is especially important on cache hits, where all the parts are available immediately with no network breaks.
The proposed scheduler would be used within the task queue when a queue is run or resumed and the next task is being executed.
The first implementation would only need two basic APIs:
scheduler.addJob(fn, opt_priority)
scheduler.removeJob(key)
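A minimal sketch of those two APIs; the priority semantics and the actual deferral mechanism (e.g. requestAnimationFrame, postMessage) are open details, so this version simply orders pending jobs by priority when run:

```javascript
// Sketch of the proposed scheduler; names match the two APIs above,
// everything else is illustrative.
function Scheduler() {
  this.jobs = {};
  this.nextKey = 1;
}
Scheduler.prototype.addJob = function(fn, opt_priority) {
  var key = this.nextKey++;
  this.jobs[key] = {fn: fn, priority: opt_priority || 0};
  return key;  // handle for later removal
};
Scheduler.prototype.removeJob = function(key) {
  delete this.jobs[key];
};
// Run all pending jobs, highest priority first.
Scheduler.prototype.run = function() {
  var self = this;
  var keys = Object.keys(this.jobs);
  keys.sort(function(a, b) {
    return self.jobs[b].priority - self.jobs[a].priority;
  });
  keys.forEach(function(k) {
    var job = self.jobs[k];
    delete self.jobs[k];
    job.fn();
  });
};
```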
Certain syntax errors are just causing our tests to skip instead of actually failing. #117 is an example (fixed in #120) where the syntax error caused all resource_test tests to skip.
We need to either have some compilation on tests or some more general test failure handling which will find errors such as these.
When a URL has a hash component (e.g. "/my_stuff#my_hash_component"), SPF should render the new page, then navigate directly to the element with ID #my_hash_component.
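One possible post-render step, sketched with an illustrative helper (not the actual SPF internals): after processing the response, jump to the element whose ID matches the URL fragment.

```javascript
// Sketch: scroll to the element named by the URL's hash component.
// "doc" is passed in to keep the sketch testable without a browser.
function scrollToHash(doc, url) {
  var i = url.indexOf('#');
  if (i === -1) return false;          // no hash component
  var el = doc.getElementById(url.slice(i + 1));
  if (!el) return false;               // no matching element
  el.scrollIntoView();
  return true;
}
```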
The resource and response tests rely on a FakeElement object that has its own implementation of DOM methods like appendChild and insertBefore. This is fine for the common case, but edge cases like older version of Internet Explorer may have quirks that are not reflected in the FakeElement implementation. This masks bugs that would otherwise fail the tests.
These tests should instead rely on the native DOM and its methods.
When handling clicks on enabled links, SPF will cancel the browser's default static navigation and attempt to perform dynamic navigation.
This is true for any URL, and currently, when navigating off site, SPF relies on the browser's same-origin security policy to throw an error, either from the History or XHR API. This error then stops dynamic navigation and triggers a full reload to the intended destination.
This process can be streamlined by ignoring links to pages with different domains and allowing the default static navigation to occur immediately.
For sites using CORS to perform cross-domain XHR, we could provide a domain whitelist config.
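The early check could be sketched as follows; the whitelist parameter stands in for a hypothetical config option and the helper name is illustrative:

```javascript
// Sketch: handle a click dynamically only if the link is same-origin,
// or if its origin appears in an optional CORS whitelist. Returning
// false means "let the browser's default static navigation proceed".
function shouldHandle(linkOrigin, pageOrigin, opt_whitelist) {
  if (linkOrigin === pageOrigin) return true;
  return (opt_whitelist || []).indexOf(linkOrigin) !== -1;
}
```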
Currently, SPF ignores navigations to the same page. While this is an optimization, it breaks the expected behavior provided by browsers. For example in Chrome, if you click a link that leads to the page you're already on, it reloads the page. Doing this should not add another history entry.
The logic in net.resource.create to remove unnecessary DOM elements after load was made a little too broad. It now removes styles as well, which renders the loading pointless.
Would it be possible to provide spf.js in the repo? If this tool consists only of the JavaScript file, there's no reason to require Python + Java + the make command (Mac/Linux only) to build it, is there?
In api.js, spf.process
is defined as follows:
spf.process = function(response) {};
In main.js, spf.process
is mapped to spf.nav.response.process
, and in response.js, spf.nav.response.process
is defined as follows:
spf.nav.response.process = function(url, response, opt_callback, opt_navigate,
opt_reverse)
This inconsistency needs to be resolved.
Currently, there is no built-in support for automatically triggering a page reload via a response attribute. A goal of SPF is to support seamless transitions across revisions (both for user experience and to avoid increased QPS). However, as discussed in https://groups.google.com/d/topic/spfjs/e3cchhIYu5Q/discussion, forcing reloads during roll-outs can be desirable, and we should add support for it (the overhead is low).
For now, this behavior can be emulated with a response like the following:
{
"foot": "<script>window.location.reload();</script>"
}
Currently, a couple of dependencies are managed via git submodules. This breaks if the repo is downloaded as a zip, etc., and make will fail.
We should also have a nicer error message if python isn't installed.
The position of stylesheets in the <head> determines the cascade of styles. When a stylesheet needs to be reloaded by SPF, for example when a new version is available, the new stylesheet is appended to the <head>. This alters the cascade and may give precedence to unexpected styles.
Instead, the new stylesheet should take the place of the old stylesheet in terms of DOM positioning. This could be done by updating the href attribute of the old stylesheet's <link> element. This ensures that the cascade doesn't change on reloads.
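The in-place reload could be sketched as follows; the helper is illustrative, not the actual SPF internals:

```javascript
// Sketch: point the existing <link> at the new URL instead of
// appending a new element, so its position in the DOM (and thus the
// cascade order) is unchanged.
function reloadStyle(link, newUrl) {
  link.href = newUrl;
  return link;
}
```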