Comments (22)

sftim commented on July 28, 2024

BTW https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/#why-isn-t-there-a-stable-list-of-domains-ips-why-can-t-i-restrict-image-pulls is a better link for “don't depend on any implementation details”.

BenTheElder commented on July 28, 2024

AFAICT the simple per-cloud-run-region regionalizing approach is working well, based on logs etc.

For example, pulling from the California Bay Area, I am redirected to the GCP us-west2 Artifact Registry (Los Angeles) and the AWS us-west-1 S3 bucket (N. California).
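To make that redirect decision concrete, here is a minimal Go sketch of the idea (this is not archeio's actual code; the bucket URLs, the upstream URL, and the blob path layout are placeholders invented for illustration):

```go
// Standalone illustration of "send AWS clients to the nearest S3 bucket,
// everyone else to the upstream registry". Not archeio's real implementation.
package main

import "fmt"

// Hypothetical mapping from a client's detected AWS region to the S3 bucket
// chosen to serve blobs for that part of the world; the real deployment has
// more regions and different bucket names.
var nearestBlobBucket = map[string]string{
	"us-west-1": "https://example-registry-us-west-1.s3.dualstack.us-west-1.amazonaws.com",
	"us-east-1": "https://example-registry-us-east-1.s3.dualstack.us-east-1.amazonaws.com",
}

// upstreamRegistry stands in for the GCP Artifact Registry backend; the real
// URL is part of the service configuration and is not shown here.
const upstreamRegistry = "https://upstream-registry.example.com"

// blobRedirect picks a redirect target for a blob (layer) request: clients in
// a known AWS region go to the nearest S3 bucket, everyone else upstream.
func blobRedirect(clientAWSRegion, blobPath string) string {
	if bucket, ok := nearestBlobBucket[clientAWSRegion]; ok {
		return bucket + blobPath
	}
	return upstreamRegistry + blobPath
}

func main() {
	// e.g. a pull from an EC2 instance in N. California
	fmt.Println(blobRedirect("us-west-1", "/blobs/sha256:abc123"))
	// e.g. a pull from outside AWS (today this falls through to the GCP backend)
	fmt.Println(blobRedirect("", "/blobs/sha256:abc123"))
}
```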

We can revisit cloudfront later, but I don't think we need to rush.

We might want to consider adding more S3 regions, notably South America, where we have Cloud Run / Artifact Registry but no AWS presence: kubernetes/k8s.io#4739 (comment)

dims commented on July 28, 2024

cc @BenTheElder @thockin @ameukam

thockin commented on July 28, 2024

SGTM in principle.

BenTheElder commented on July 28, 2024

Yeah:

  1. Doing this is an obvious next step now that we have a real budget from Amazon. We've been discussing it since last year, following the announcement, in the SIG meeting and other forums.
  2. Adding the GCP ranges is relatively easy; however, matching them separately is inefficient, so we should refactor a bit.

The main detail that needs settling is how we handle routing non-AWS users to AWS.

@ameukam suggested perhaps we should just go ahead and switch to CloudFront.
Otherwise I think we'd need to switch from IP => region to Cloud Run instance => region, and figure out and configure a reasonable mapping.

dims commented on July 28, 2024

@BenTheElder switching to CloudFront sounds like a good quick win; let's do that and leave the other suggestion for a longer time frame. (Switching to CloudFront sounds like a reversible choice.)

sftim commented on July 28, 2024

Are we switching layer serving to CloudFront, or https://registry.k8s.io/ itself? The first option is straightforward but doesn't cut the GCP bill.

To tell the truth, I'm not sure how the second option helps the GCP bill either.

sftim commented on July 28, 2024

Perhaps we're thinking of using AWS (and CloudFront) to serve the lot, and not use GCP at all?

thockin commented on July 28, 2024

BenTheElder commented on July 28, 2024

@ameukam was suggesting migrating to CloudFront for layer serving instead of regionalizing to S3 ourselves (which we currently do by mapping a client IP in a known AWS region to the nearest-serving S3 region).

We either have to do that or otherwise update how we regionalize to work with non-AWS users, as a prerequisite to "default layer serving to Amazon".

As Tim said, serving content blobs is the only expensive part.

The option to regionalize by assigning a default S3 bucket per Cloud Run region is potentially less work than spinning up CloudFront, depending on who's working on it. It doesn't require new infra, but the mapping would take some thought.
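A rough sketch of what that per-Cloud-Run-region default could look like; the region pairings below are entirely hypothetical, and in practice the chosen default would be baked into each region's deployment config rather than a map in code:

```go
// Standalone sketch only; the real mapping would live in the per-region
// Cloud Run deployment configs, not hard-coded like this.
package main

import "fmt"

// Hypothetical pairing of Cloud Run regions with the S3 bucket considered
// "nearest" for clients whose IP does not match a known AWS region.
var defaultBucketByCloudRunRegion = map[string]string{
	"us-west1":           "example-registry-us-west-1",
	"europe-west4":       "example-registry-eu-west-1",
	"southamerica-east1": "example-registry-us-east-1", // no nearby AWS presence yet
}

// defaultBlobBucket returns the default bucket for the Cloud Run region this
// instance serves from, or false if none is configured (in which case the
// service would keep redirecting such clients to the upstream registry).
func defaultBlobBucket(cloudRunRegion string) (string, bool) {
	bucket, ok := defaultBucketByCloudRunRegion[cloudRunRegion]
	return bucket, ok
}

func main() {
	if bucket, ok := defaultBlobBucket("southamerica-east1"); ok {
		fmt.Println("non-AWS clients hitting this instance get:", bucket)
	}
}
```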

sftim commented on July 28, 2024

Ah, right. Then for layers we either:

  • serve direct from AWS for clients already inside AWS, and use a CloudFront distribution for the rest of the world (low-cost price class / backed by a cheap region)
  • serve everything through CloudFront (low-cost price class / backed by a cheap region)
  • serve everything through CloudFront (low-cost price class / backed by a cheap region), except for clients inside GCP
  • one of the above three options with an extra fallback to GCP in case S3 is offline

Serving directly from S3 for clients inside AWS has benefits (that mainly accrue to the client) - for example, they can use a gateway-type VPC endpoint for image pulls and avoid using the public internet. Switching away might merit a notification that people who relied on this property can no longer do so.

ameukam commented on July 28, 2024

serve everything through CloudFront

This option is the one we need to go with. We don't want to deal with specific use cases that will increase our operational burden. For users with specific requirements, we will suggest running a local mirror.

sftim commented on July 28, 2024

OK; I do think we should announce the change though. We don't need to add a wait period, because we already told people not to rely on implementation details.

BenTheElder commented on July 28, 2024

one of the above two options with a fallback to GCP in case S3 is offline

We have to do this not just because S3 might be offline (which seems unlikely anyhow), but for the more common problem that async layer population hasn't happened yet. Synchronous promotion to AWS has not landed in the image promoter / release process.

This part is already implemented. https://github.com/kubernetes/registry.k8s.io/blob/main/cmd/archeio/docs/request-handling.md
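For reference, the shape of that fallback is roughly the following. This is a simplified sketch, not the code behind request-handling.md: the URLs and helper names are invented, and the real service would cache the existence check rather than issuing a HEAD request per pull.

```go
// Simplified sketch of "redirect to S3 only if the layer is actually there".
package main

import (
	"fmt"
	"net/http"
)

// upstreamRegistry stands in for the GCP-backed upstream; the real URL is
// configuration, not shown here.
const upstreamRegistry = "https://upstream-registry.example.com"

// blobExists issues a HEAD request against the candidate S3 URL. The real
// service would cache results so each layer is only checked occasionally.
func blobExists(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// redirectTarget prefers the S3 copy, but falls back to the upstream registry
// when the blob has not been populated into S3 yet (or S3 is unreachable).
func redirectTarget(s3BlobURL, blobPath string) string {
	if blobExists(s3BlobURL) {
		return s3BlobURL
	}
	return upstreamRegistry + blobPath
}

func main() {
	fmt.Println(redirectTarget(
		"https://example-registry-us-east-1.s3.amazonaws.com/blobs/sha256:abc123",
		"/v2/pause/blobs/sha256:abc123",
	))
}
```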

serve everything through CloudFront

I think people here are conflating putting CloudFront in front of the entire service, which I do not agree with and which had not been suggested previously, with putting CloudFront in front of the layer store. The former doesn't make technical sense when registry.k8s.io itself is serving nothing* but redirects.

We should look at cloudfront for the layer hosting.

sftim commented on July 28, 2024

I amended #143 (comment) to clarify

BenTheElder commented on July 28, 2024

Also, in the future we'll want to do different cost routing (say we start to also use Fastly or Azure), which is easier if it's just a matter of updating the redirect logic.

sftim commented on July 28, 2024

/retitle Serve container image layers from AWS by default (make exception when clients are from Google)

BenTheElder commented on July 28, 2024

Serving directly from S3 for clients inside AWS has benefits (that mainly accrue to the client) - for example, they can use a gateway-type VPC endpoint for image pulls and avoid using the public internet. Switching away might merit a notification that people who relied on this property can no longer do so.

That's an interesting point, though our stance so far has very much been that:

  1. We don't have a mechanism to notify all users.
  2. Users may not depend on any implementation details, only OCI compliance. https://github.com/kubernetes/registry.k8s.io#stability is at the top of the README that https://registry.k8s.io redirects to and outlines this in more detail.
  3. Due to 2) we don't need a mechanism to notify all users / we are under-staffed and under-funded to manage this.

This sort of detail is what prevented us from redirecting k8s.gcr.io and bringing our costs down immediately; we cannot dig ourselves back into that hole.

If anything we should make "breaking" changes to those depending on implementation details more often (e.g. perhaps renaming the buckets) to underline the point that they're just implementation details and we will use whatever we can fund.

hh commented on July 28, 2024

Similar changes in the wider community and their communication: https://support.hashicorp.com/hc/en-us/articles/11239867821203?_ga=2.46340071.1359745362.1675131001-690834462.1675131001

BenTheElder commented on July 28, 2024

PRs are ready, #147 and kubernetes/k8s.io#4739

BenTheElder commented on July 28, 2024

kubernetes/k8s.io#4741 promoted the image. Last step is updating prod.

This change is safe: even if we misconfigured a default URL, we will detect the content as not available on AWS and fall back to the upstream registry on AR. The runtime logic diff is pretty small; most of the diff is refactoring the cloud IP management and updating the runtime deployment configs to map Cloud Run region to default S3 region (for clients where we cannot detect a known region based on IP).

Will follow up with a prod deployment PR shortly. Sandbox is running smoothly.

BenTheElder commented on July 28, 2024

kubernetes/k8s.io#4742: this is deployed.
