Comments (19)

svgeesus commented on August 20, 2024

Related:

garretrieger commented on August 20, 2024

There are two types of caching to consider:

  1. Client-side caching: the spec currently covers this. With incremental transfer you still get the benefits of the client-side cache, since any data previously loaded for a particular font URL will be reused on future pages.

  2. Server-side caching: since the requests are stateless, caching of responses is allowed. However, the cache key space is necessarily much larger than with a single font file. This is a fundamental tradeoff of this approach. The good news is that intelligent server implementations can mitigate it: for example, a server could compute one or more supersets that cover the most commonly requested codepoint sets, then serve those from a cache when possible (sketched below). The spec leaves room for server implementations to make such optimizations as they see fit. This is something I expect to see plenty of further improvement in as we gain more experience serving incremental fonts.

The spec currently does briefly talk about this in the performance considerations section. I can expand this section a bit more to also mention the caching aspect.
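For illustration, here is a minimal sketch of the superset-matching idea from point 2 above. The cached subsets and the lookup function are invented for this example; the spec doesn't mandate any particular scheme.

```python
# Hypothetical server-side cache: pre-built subsets keyed by the codepoints they cover.
CACHED_SUBSETS = {
    frozenset(range(0x0020, 0x007F)): b"<basic latin subset bytes>",
    frozenset(range(0x0020, 0x0180)): b"<latin + latin extended-A subset bytes>",
}

def find_cached_response(requested_codepoints):
    """Return the smallest cached subset that covers the request, or None."""
    covering = [cps for cps in CACHED_SUBSETS if requested_codepoints <= cps]
    if not covering:
        return None  # fall back to building a tailored subset on demand
    return CACHED_SUBSETS[min(covering, key=len)]

# A request for plain ASCII text hits the basic latin entry.
print(find_cached_response({ord(c) for c in "Hello, world"}) is not None)
```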

garretrieger commented on August 20, 2024

Duplicate of #93

svgeesus commented on August 20, 2024

@martinthomson we closed this as a duplicate, but you are tagged on the other issue as well.

mnot commented on August 20, 2024

AIUI that issue was resolved by relying on redirects (which are horrible for performance) and QUERY (which doesn't exist yet, may not be cacheable in these use cases, and if it is, won't be implemented broadly for some time, if ever).

I don't think this issue is closed.

garretrieger commented on August 20, 2024

Not mentioned in the other issue, but caching will also be supported via GET requests so long as the response includes a "Vary: Font-Patch-Request" header. That said, direct caching of the responses is unlikely to be all that effective due to the large cache key space, as you originally mentioned. Though it may be helpful in cases such as a common initial request from a widely visited page. Instead it's probably better for caching to be done in the server implementation by matching incoming requests to a nearby superset of codepoints that the server has a cached response for. This is currently supported in the spec.
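As a rough illustration of the GET + "Vary: Font-Patch-Request" setup (Flask is used here purely as an example, and build_subset() is a hypothetical placeholder):

```python
from flask import Flask, Response, request

app = Flask(__name__)

def build_subset(patch_request_header: str) -> bytes:
    # Placeholder: a real server would decode the header and subset the font accordingly.
    return b"<font subset bytes>"

@app.route("/fonts/notosans")
def serve_subset():
    body = build_subset(request.headers.get("Font-Patch-Request", ""))
    resp = Response(body, mimetype="font/ttf")
    # Without Vary, a shared cache could serve one client's subset to a client
    # that asked for a different set of codepoints.
    resp.headers["Vary"] = "Font-Patch-Request"
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp
```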

If strong regional caching that does not rely on caching in the server implementation is a requirement for a particular use case, then serving fonts via range requests is likely a better fit; the range request approach provides very good support for caching.

mnot commented on August 20, 2024

The HTTP WG has been exploring ways to improve Vary efficiency; the latest iteration (not yet adopted, but being discussed next week) is here.

garretrieger commented on August 20, 2024

Thanks for pointing this out, I was not aware of it.

garretrieger commented on August 20, 2024

Reopening, as there are a couple of additions we could make to the spec:

  • Add text to the specification to include the "Vary" header in requests.
  • Add a recommendation for how server-side caching could be done in an implementation.

mnot commented on August 20, 2024

You need to use Vary in responses, not requests.

However, that's a bare minimum for safety only - it's unlikely you'll get many cache hits because of the extremely dynamic nature of the requests.

Architecturally, it'd be much better to cast this as a new range-unit, so that it reuses the framework that range requests offer (which are semantically very similar to what you're doing). E.g.,

Range: fontpatch=[base64 here]

... with the response being a 206, which clearly marks it as a partial response to intermediaries, thereby making it less likely to be erroneously reused.
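To make the shape of that concrete, a client-side sketch might look like the following. The fontpatch range unit does not exist today, and the payload and URL are placeholders:

```python
import base64
import requests

patch_request = b"<binary font patch request>"           # placeholder payload
range_value = "fontpatch=" + base64.urlsafe_b64encode(patch_request).decode("ascii")

resp = requests.get(
    "https://fonts.example/notosans.ttf",                 # hypothetical URL
    headers={"Range": range_value},
)

if resp.status_code == 206:
    # 206 marks the body as a partial response, so intermediaries are less
    # likely to reuse it for a request with a different Range value.
    partial_font = resp.content
```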

Recommending how server-side caching could be done isn't helpful; have you engaged with any CDNs to see how likely they are to implement this?

garretrieger commented on August 20, 2024

That's an interesting idea. I hadn't realized that range units are extensible. It seems the current range specification allows for us to have a pretty arbitrary range identifier (specifically other-range).

At first glance I don't see any reason why we couldn't switch to using a custom range unit specific to font subsets instead of the font-patch-request header. Though I will have to spend some time reviewing the relevant HTTP range-request specs to make sure what we are trying to do would be a good fit for that framework.

Also, it's probably worth mentioning that we've recently reworked the specification to work within the compression-dictionary-transport framework. The relevant part for this issue is that the response (after content-encoding has been decoded) is now a valid font subset file (where previously the response was a patch that needed further decoding). The patching part is now handled as part of the content-encoding.

With this in mind I think we end up with something like this:

  • Define a custom range unit "fontsubset" that identifies a particular subset of a font that should be loaded.
  • Use the "sec-available-dictionary" header to describe a font subset that the client already has, for the purposes of forming the shared brotli encoding of the response.

What do you think?

mnot commented on August 20, 2024

That sounds interesting, but we'd need to carefully consider the interaction of content encoding and range requests (which have never played very nicely together).

Also, it seems like you're doing something very different than compression dictionary transport. There, the dictionary is a separate resource on the server, identified by a URI and relatively static. Here, the dictionary is the current state of the client's local cache (effectively). So (if I understand the proposal correctly) I'm wondering how much reuse you'll actually get beyond syntax -- keeping in mind that we often find trouble happens when protocol syntax is reused but semantics diverge.

I was thinking about reusing ranges because it seems to me that you could encode the entire patch request into the range identifier. It's not particularly elegant, but it is in keeping with how ranges work, conceptually.

garretrieger commented on August 20, 2024

I’ve spent some time reviewing the specs relating to range requests and unfortunately, by my interpretation, expressing the font patch request as a range request probably won’t be a good fit. Overall there’s an assumption that runs through the existing specification that the resource is divided into some number of units and the result of the request is some subset of those units. This manifests in a couple of places which would cause issues when trying to utilize this for font subsets, which are not formed from a set of units of the original font resource:

  1. Content-Range: while this does allow the new custom range unit to be used, it currently limits the actual range values to be integers (excluding the use of other-range here). Furthermore those integers must adhere to rules like first < last < total. With a custom font subset range we wouldn’t be able to meaningfully populate these. Content-Range is required in responses with status code 206.

  2. The requirements for status code 206 assume that range responses should be combinable, which isn’t easily achieved with font subsets (you can only combine subset responses where one is a superset of another).

  3. To echo what you said, it appears that the interaction of content-encoding and range requests is ill defined. My reading of the spec is that range selection operates after content-encoding has been applied, since content-encoding is part of the selected representation.

That said, these issues are not necessarily insurmountable, but I’m currently leaning towards sticking with “font-patch-request” + “vary”. Also, after further thought I think it’s best to keep the entirety of the font-patch-request message in one place instead of splitting part of it out into the compression dictionary transport header (so sticking with how we currently have it specified).

Also, it seems like you're doing something very different than compression dictionary transport. There, the dictionary is a separate resource on the server, identified by a URI and relatively static. Here, the dictionary is the current state of the client's local cache (effectively). So (if I understand the proposal correctly) I'm wondering how much reuse you'll actually get beyond syntax -- keeping in mind that we often find trouble happens when protocol syntax is reused but semantics diverge.

The compression dictionary transport specification specifically allows past versions of a resource to be used to encode future versions (see delta compression under use cases). I’ve been in close contact with the folks developing the compression dictionary transport proposal and they’re OK with its use for IFT. The IFT spec needs a couple of updates to sync up with the latest version of the proposal but the plan is to fully follow the semantics laid out in the proposal.

I’ve implemented a prototype of incremental transfer in Chrome which utilizes the separate prototype compression dictionary transport implementation, so I can confirm it’s possible to re-use the generic compression dictionary transport mechanism as part of a client-side IFT implementation. It works roughly like this (a header-level sketch follows the list):

  1. Initially there’s no existing dictionary for an incremental font transfer URL. The client forms the font-patch-request based on what codepoints it needs and sends the request + font-patch-request header.

  2. Server responds with the appropriate font subset, content-encoded using brotli/gzip (or some other standard encoding) and includes the use-as-dictionary header with the match field set to only match the full path for the font.

  3. Browser stores the decoded response for use as a dictionary in the future.

  4. At some later time the browser decides it needs more codepoint coverage in the font. It first checks for any existing dictionary; if one exists, the font-patch-request is formed taking into account the existing dictionary + whatever additional codepoints are needed.

  5. Request is sent with a font-patch-request header and a sec-available-dictionary header.

  6. The font-patch-request header contains sufficient information for the server to reconstruct the dictionary identified by the hash in sec-available-dictionary. From there the shared dictionary compressed encoding can be created and delivered to the client. The response again includes the use-as-dictionary header so that future requests can use the updated font as a dictionary.
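Here is the header-level sketch referred to above. The URL and the encoding helper are placeholders, and the hashing and serialization of sec-available-dictionary follow whatever the compression dictionary transport draft specifies, so the details below are only indicative:

```python
import hashlib
import requests

FONT_URL = "https://fonts.example/notosans.ttf"   # hypothetical
stored_dictionary = None                          # decoded subset kept by the client

def encode_patch_request(needed, have):
    return "<encoded font-patch-request>"         # placeholder

def fetch(needed, have):
    headers = {"Font-Patch-Request": encode_patch_request(needed, have)}
    if stored_dictionary is not None:
        # Steps 4-5: identify the dictionary the client already has
        # (hash shown as hex purely for illustration).
        headers["Sec-Available-Dictionary"] = hashlib.sha256(stored_dictionary).hexdigest()
    resp = requests.get(FONT_URL, headers=headers)
    # Steps 2-3 and 6: after content-encoding is decoded (including any shared
    # dictionary encoding), the body is a complete font subset; store it so the
    # next request can be delta-encoded against it.
    return resp.content

stored_dictionary = fetch(needed={0x48, 0x69}, have=set())                 # steps 1-3
stored_dictionary = fetch(needed={0x4E2D, 0x6587}, have={0x48, 0x69})      # steps 4-6
```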

“font-patch-request” communicates two things:

  1. Primarily it specifies to the server that the client would like an alternate version (selected representation) of the underlying font resource which contains at minimum the data needed to support the requested font subset (union of *_needed and *_have members).
  2. Via the *_have members it provides a hint to the server on how to recreate a dictionary which matches “sec-available-dictionary”. In theory a server could ignore font-patch-request (for the purposes of finding the dictionary) if it maintained a cache mapping hashes to the dictionary files it has previously sent out. Ultimately sec-available-dictionary is authoritative in identifying the dictionary that can be used; the hint is provided as an optimization so that a server does not need to retain every dictionary it has ever generated, since the dictionary can be recreated on demand. In practice I suspect server implementations would likely use a mixture of a bounded cache of frequently occurring dictionaries plus on-demand generation when a dictionary is not present (see the sketch below).

Note: if the dictionary can’t be found/recreated then the server will respond with the requested font subset from (1) but will not use “sbr” encoding and everything will still work as normal.
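A rough sketch of that server-side lookup; every helper here is hypothetical and the hash format is shown as hex only for illustration:

```python
import hashlib

dictionary_cache = {}   # hash -> previously served subset bytes (bounded in practice)

def rebuild_subset_from_hint(have_members):
    return b"<subset rebuilt from the *_have members>"   # placeholder

def find_dictionary(sec_available_dictionary, have_members):
    # 1. Fast path: we still have the dictionary we previously served.
    if sec_available_dictionary in dictionary_cache:
        return dictionary_cache[sec_available_dictionary]
    # 2. Otherwise try to recreate it from the *_have hint and verify against
    #    the authoritative hash from sec-available-dictionary.
    candidate = rebuild_subset_from_hint(have_members)
    if hashlib.sha256(candidate).hexdigest() == sec_available_dictionary:
        return candidate
    # 3. Give up: respond with the requested subset without "sbr" encoding.
    return None
```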

martinthomson commented on August 20, 2024

I don't see why Unicode codepoints can't be used for range units. That's what @font-face does, at least within a single variant. Maybe that isn't enough if you consider different variants, variable fonts, and whatnot, but range expressions would seem to at least be a plausible option.

The superset requirement for combination is a little surprising to me.

The delta encoding stuff is maybe OK, but it seems like you might have some difficulty with non-linearity when clients have already made some number of partial requests and have synthesized something from those requests. Maybe it can be made to work, but it would be extremely fragile.

That said, delta encoding seems like a great idea for simpler scenarios, like the case where you start with Latin script and want to expand in some way from there. That's a case where you might just expand iteratively, either from a baseline (Latin) or what you already have (Latin + Math, Latin + Greek, Latin + Line Drawing, Latin + Emoji Subset 1, etc...).

garretrieger commented on August 20, 2024

I don't see why Unicode codepoints can't be used for range units. That's what @font-face does, at least within a single variant. Maybe that isn't enough if you consider different variants, variable fonts, and whatnot, but range expressions would seem to at least be a plausible option.

A font subset definition is currently made up of a set of unicode codepoints, the variable axis space, and the set of layout features being requested. So using just codepoints doesn’t fully capture what is covered by a response. The other issue is that “content-range” can specify only one continuous range of units; to have more than one range, the range request specification currently encodes this as a multipart response, which doesn’t fit with incxfer, which always uses a single-part response.

The superset requirement for combination is a little surprising to me.

In fonts there are various mechanisms which associate data with combinations/sequences of codepoints. A common example is the “fi” ligature: if text has an ‘f’ followed by an ‘i’, it will be substituted with a special “fi” glyph. Now consider a case where you have two subsets, one containing ‘f’ and the other containing ‘i’. Neither subset would contain the “fi” ligature since it’s not reachable. If you tried to combine those two subsets, the merged font wouldn’t render the same as the original font, on account of the “fi” ligature glyph being missing. This is one of the main shortcomings of the unicode-range approach to serving fonts.
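This is easy to see with an off-the-shelf subsetter. A small fontTools sketch (the font path is a placeholder; exact glyph names vary by font, and the font needs to actually ship an “fi” ligature):

```python
from fontTools.ttLib import TTFont
from fontTools.subset import Options, Subsetter

def glyphs_for(path, text):
    font = TTFont(path)
    subsetter = Subsetter(Options())
    subsetter.populate(text=text)
    subsetter.subset(font)
    return set(font.getGlyphOrder())

only_f = glyphs_for("MyFont.ttf", "f")
only_i = glyphs_for("MyFont.ttf", "i")
both   = glyphs_for("MyFont.ttf", "fi")

# The ligature glyph is only reachable when 'f' and 'i' are subset together,
# so it shows up in `both` but in neither single-letter subset.
print(sorted(both - (only_f | only_i)))
```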

The delta encoding stuff is maybe OK, but it seems like you might have some difficulty with non-linearity when clients have already made some number of partial requests and have synthesized something from those requests. Maybe it can be made to work, but it would be extremely fragile.

Yes, a bit of care needs to be taken here but in my prototype in Chrome this didn’t end up being too difficult:

  1. Since there’s one process per tab, I can ensure that for a single tab there’s only one in-flight augmentation request for a specific URL at a time. While a request is in progress, any newly encountered codepoints are queued up for loading once the current request finishes.
  2. However, if there are multiple tabs augmenting the same URL under the same cache partition key, then it’s possible to have multiple augmentations in flight at the same time.
  3. Each request, while in flight, keeps a reference to the specific dictionary it was made relative to. That way, even if there are multiple in-flight requests, they will all be able to decode successfully.
  4. Finally, only one copy of the dictionary is retained per URL, so if there are multiple in-flight requests, whichever one finishes last determines the current dictionary.
  5. The process(es) that lost the race and didn’t have their version of the dictionary persisted can still perform augmentations, since any future requests will be made relative to whatever dictionary was persisted. The cost here is that some codepoints may need to be re-requested if they weren’t in the dictionary that did get persisted.

This of course is just one example of how it could be done. There are other approaches, such as having the network process coordinate requests and ensure only one is in flight at a time across tabs. For our implementation we decided the added complexity is not worth the small downside of potentially re-requesting data for what should be a relatively infrequent occurrence. A rough sketch of the bookkeeping follows.
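This is a schematic sketch only (not actual Chrome code; all names are invented):

```python
persisted_dictionary = {}   # url -> decoded subset bytes (one per URL)

class InflightAugmentation:
    def __init__(self, url, dictionary):
        self.url = url
        # Pin the exact dictionary this request was encoded against, so the
        # response can be decoded even if another tab replaces the persisted one.
        self.dictionary = dictionary

    def on_response(self, encoded_body, decode):
        decoded = decode(encoded_body, self.dictionary)
        # Last writer wins; a tab that loses the race may have to re-request a
        # few codepoints later, relative to whatever ended up persisted.
        persisted_dictionary[self.url] = decoded
        return decoded
```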

That said, delta encoding seems like a great idea for simpler scenarios, like the case where you start with Latin script and want to expand in some way from there. That's a case where you might just expand iteratively, either from a baseline (Latin) or what you already have (Latin + Math, Latin + Greek, Latin + Line Drawing, Latin + Emoji Subset 1, etc...).

The IFT spec is pretty open ended about what the server is allowed to do. The only requirement is that responses contain at least what was asked for. So this type of approach is absolutely something that can be done and likely makes a lot of sense for scripts which don’t have large codepoint counts (i.e. not CJK, emoji, or icon fonts).

From a server implementation perspective I think it would be reasonable to define a fairly compact Latin core (basically just ASCII) and then several extended Latin subsets for various Latin-based scripts (for example Vietnamese, plus sets for European languages which need specific diacritics). Outside of Latin you could do similar things for other languages/scripts. Once these are defined, the server could always augment in units of the defined subsets based on what codepoints have been requested.
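A minimal sketch of that bucketing idea; the particular buckets are invented for illustration and not anything the spec defines:

```python
BUCKETS = {
    "latin-core": set(range(0x0020, 0x007F)),                    # roughly ASCII
    "latin-ext":  set(range(0x00A0, 0x0180)),                    # common diacritics
    "vietnamese": set(range(0x1EA0, 0x1EFA)) | {0x0110, 0x0111},
}

def buckets_to_serve(requested_codepoints):
    """Serve the union of every predefined bucket the request touches."""
    return [name for name, cps in BUCKETS.items() if requested_codepoints & cps]

print(buckets_to_serve({ord(c) for c in "Xin chào"}))   # ['latin-core', 'latin-ext']
```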

This would give performance that’s better than the unicode-range based solutions in use today (by way of having tighter subsets and not duplicating data between subsets) while avoiding breaking rendering across subsets, all while using a similar number of font loads.

garretrieger commented on August 20, 2024

#153 adds "Vary" to the response.

garretrieger commented on August 20, 2024

Some updates on this post-TPAC. I've proposed an alternative version of IFT where the references to patches are embedded in the font file (see: https://lists.w3.org/Archives/Public/public-webfonts-wg/2023Sep/0003.html). Most importantly, this eliminates the dynamically constructed patch request message and associated custom header in favour of regular old URLs pointed to by a mapping in the font file.

This would allow fully statically hosted implementations (and hence easy cacheability) while still leaving the door open for dynamic implementations if desired.
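A very rough sketch of what a statically hosted flow could look like. The mapping format inside the font isn't shown and read_patch_map() is hypothetical; the point is simply that every fetch is an ordinary GET of a fixed URL, so normal HTTP caching applies:

```python
import requests

def read_patch_map(font_bytes):
    # Hypothetical: extract {codepoint set: patch URL} from a table in the font.
    return {frozenset({0x4E2D, 0x6587}): "https://fonts.example/notosans.cjk-001.patch"}

initial = requests.get("https://fonts.example/notosans.ift.ttf").content
needed = {0x4E2D}

for codepoints, url in read_patch_map(initial).items():
    if needed & codepoints:
        patch = requests.get(url).content   # cacheable like any other static file
        # apply_patch(initial, patch) would then produce the extended font.
```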

We're also currently exploring the possibility of merging this new approach and the IFTB approach into a single unified IFT mechanism.

svgeesus commented on August 20, 2024

We're also currently exploring the possibility of merging this new approach and the IFTB approach into a single unified IFT mechanism.

This has now been done, so the whole "produce a complete font in response to a query" issue is no longer applicable.

@garretrieger what do you think, close?

garretrieger commented on August 20, 2024

For reference, here's an early draft of the new approach: https://garretrieger.github.io/IFT/Overview.html

This allows the patches to be hosted as regular files and uses no special headers/HTTP extensions, so caching now works normally.
